Bicycle Helmet Safety Institute

Consumer-funded, volunteer staff


Virginia Tech's STAR helmet ratings

Summary: Virginia Tech has developed a methodology for testing helmets for their ability to reduce concussions. After football and hockey helmets, it launched ratings for bicycle helmets in June of 2018. We have some reservations about the methodology, but strongly support the concept of testing and ranking helmets for low-level impact performance. We have a page listing helmet models where the STAR ratings and Consumer Reports ratings concur. Recent concerns about concussions after COVID-19 infections might make the ratings relevant to those recovering from that disease.

Virginia Tech's Department of Biomedical Engineering and Mechanics has been involved in concussion research in football for a number of years, outfitting teams with sensors to detect concussion-level hits and developing a very large database of hits and diagnoses. They developed a program called STAR ratings based on that research, rating football helmets on their ability to prevent concussions. Although there is no agreement among helmet and concussion experts that the ratings are based on exact concussion parameters, we think they represent a valid attempt to use lab testing to highlight concussion-level performance.

After football, VT took on hockey helmets. Then in 2018 they developed a program to rate bicycle helmets. It was headed up by Megan Bland, a Graduate Research Assistant and PhD Candidate at the VT Helmet Lab in their Department of Biomedical Engineering and Mechanics. She was working with Dr. Steven Rowson and Craig McNally, and the methodology is similar to the football and hockey helmet programs. Their STAR ratings page has all the details. Here is a video of the test methods that summarizes the testing. The project was funded by the Insurance Institute for Highway Safety, and they have been publicizing it.

Standards development has lagged far behind the increased concern with concussion-level impacts. The science of concussions is still developing, and standards-makers are reluctant to introduce new benchmarks before the exact anatomical parameters are clear, since helmets redesigned to meet them might be no better than today's. The VT approach is an attempt to use field data to get close enough to concussion causality to rate helmets on their ability to perform in the range of impacts that lead to concussions. For that reason we consider their program an important advance in helmet testing.

The bicycle helmet impact tests evaluated a helmet's ability to reduce linear acceleration and rotational velocity of the head from a range of impacts a cyclist might experience. The risk formula is designed to identify helmets that may provide a reduction in injury. There were 197 models at the top in June of 2024 with five-star or four-star ratings. Another 30 rated three stars. At the bottom were six helmets with only two stars. You can access the most current version of the ratings on the VT site for specific models.
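To make the idea concrete, the published STAR papers describe an exposure-weighted sum of injury risks across the test impact conditions, with risk modeled as a logistic function of linear acceleration and rotational velocity. Here is a minimal sketch of that structure; the coefficients, exposure weights, and impact values below are placeholders for illustration, not the actual Virginia Tech numbers.

```python
from math import exp

def concussion_risk(lin_acc_g, rot_vel_rads, b0=-10.0, b1=0.04, b2=0.1):
    """Logistic injury-risk curve combining linear acceleration (g) and
    rotational velocity (rad/s). Coefficients are illustrative placeholders,
    not the published Virginia Tech values."""
    return 1.0 / (1.0 + exp(-(b0 + b1 * lin_acc_g + b2 * rot_vel_rads)))

def star_score(impacts):
    """Exposure-weighted sum of risks over the tested impact conditions.
    `impacts` is a list of (exposure_weight, lin_acc_g, rot_vel_rads) tuples.
    A lower total indicates better low-level impact performance."""
    return sum(w * concussion_risk(a, v) for w, a, v in impacts)

# Hypothetical test matrix: weights and measurements are made up.
impacts = [
    (2.0, 80.0, 15.0),   # frequent, low-severity impact
    (1.0, 120.0, 22.0),  # moderate impact
    (0.5, 160.0, 28.0),  # rarer, high-severity impact
]
print(round(star_score(impacts), 2))
```

The key design point is that common, lower-severity impacts get higher exposure weights, so a helmet that performs well in the impacts riders actually experience scores better than one tuned only for extreme hits.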

Limitations and caveats

We are excited that these bicycle helmet STAR ratings have been published and updated. But we recognize some limitations of the VT program. A small percentage of the models on the market have been rated, all chosen by the manufacturers and sent to VT. Many of them are MIPS models. They were tested using the methodology that MIPS uses: sticky headform, tight strap, severe anvil angle, rough grippy anvil, no neck. That methodology couples the headform more tightly to the helmet than it would be in the real world. It would be expected to favor the MIPS models, unlike the Snell Foundation's research that showed no benefit from MIPS.

There is still no consensus that the Virginia Tech formula for rating helmets for concussion pinpoints the helmets that reduce concussion the best. Changes to the tested helmets to improve their scores might not result in fewer concussions in the field. But since established standards organizations have failed to respond to concussion concerns, these ratings represent a good start at testing helmets systematically for lower-level and rotational impact performance.

In addition, the VT testing uses only medium-sized helmets at ambient lab conditions. Normal standards testing uses more helmets, more impact locations, and tests helmets that are cold, hot and wet. It is difficult to see how Virginia Tech drew general conclusions about the value of MIPS and about urban (skate-style) helmets needing improvement based only on their limited testing.

In 2023 evidence was published indicating that high school athletes who return to sports after a COVID-19 infection suffer concussions at an elevated rate. The VT methodology was developed before the COVID-19 pandemic, and the reasons for the elevated concussion rate are not known, but the problem could make the VT ratings more relevant for some. We have a page up on the research.

We think consumers should take note of the STAR rankings, but use them as one factor among several in their buying decisions. Avoiding a concussion is important, but not as important as knowing how your helmet performs in more severe impacts that can kill. There is no program that addresses that except the few helmets that Consumer Reports tests. We look forward to further development of the effort by Virginia Tech and others, and hope that they will pressure traditional standards-making organizations to move forward with similar testing. We have a page listing helmet models where the STAR ratings and Consumer Reports ratings concur.

STAR applied to inexpensive helmets

Two of the STAR system's principal developers, Megan Bland and Steven Rowson, published a paper in Traffic Injury Prevention in 2021 titled A price-performance analysis of the protective capabilities of wholesale bicycle helmets. The authors tested nine helmet models sold in bulk for helmet promotion programs at $3.65-$12.95 using the STAR protocol. The helmets came from Helmets R Us.

Results: "Large ranges in kinematic results led to large variations in concussion risks between helmets, and in turn, large variations in STAR values (13.5-26.2). Wholesale helmet price was not significantly associated with STAR, although incorporating 30 previous bicycle helmet STAR results produced a weak negative correlation between price and STAR overall. Nonetheless, the best-performing wholesale helmet produced one of the lowest overall STAR values for a price of $6.45. Helmet style was instead a superior predictor of STAR, with multi-sport style helmets producing significantly higher linear accelerations and resulting STAR values than bike style helmets." (In this case, multi-sport refers to skate-style helmets.) The authors also said that "Increasing wholesale helmet price was associated with increasing STAR value, albeit not significantly."

The $6.45 Model 9 helmet earned a score of 13.5, putting it in the range of 5-star (best) helmets, while the Model 38 earned a 26.2 score that would be at the very bottom of the models Virginia Tech has listed, with one or at most two stars. These helmets are not on the current STAR listing. The 30 helmets already in the STAR ranking cost from $14-$250, and the top performers were all over $75. The authors concluded that "the present results showed that a helmet priced at $6.45 can afford the same level of protection."
The authors recommend buying road helmets rather than skate-style models, since the skate helmet liners are too stiff for the lower-velocity impacts most riders experience. But the skate helmets produced slightly lower rotational readings, since they are round and smooth. Keep in mind that the Virginia Tech testing is focused on concussion-level impacts, not the most severe impacts. Virginia Tech normally tests only helmets submitted by manufacturers. If they extended their testing to more of the market, there might be thousands of 5-star helmets to be found.

Some additional perspective

Here is a presentation that BHSI's Randy Swart did for the International Conference on Safety in Cycling in 2017. It outlines our hopes for the rating systems being developed by Virginia Tech and other universities in the UK, Europe and Australia. Despite their drawbacks, these rating programs are able to make progress when consensus is lacking for changes in traditional standards.