Trad Talk Forums banner
1 - 20 of 60 Posts

·
Civil but Disobedient
Joined
·
10,696 Posts
Discussion Starter · #1 ·
There has been a lot of talk recently regarding bow testing. There is a shortage of good test results available. That is not to say that there are not good folks out there doing tests and reviews. There is just too much equipment, and too many variations. It has been my feeling that if we want to see the data, then we need to produce it ourselves. That has been my approach for several years now. I started testing my own stuff and then went on the road with my portable test kit that remains in the trunk of my car (boot to some of you). I have also taken measurements from others and returned the results in a standard format with relevant comparisons based upon test data currently in my database. I am trying to expand my database as much as possible since I believe it is in the comparisons that we get the value.

So we can spend time trying to find some standard process and person/organization for recurve testing, or we can take control ourselves, and learn something along the way. I am still accepting data from anyone who is willing to take measurements. By testing ourselves, we are sharing real data produced by folks shooting their bows. There is no perfect solution.

Bows shot from mechanically released Hooter Shooters show what a bow is capable of under ideal circumstances. They do not represent what you will see in the field. We need to learn how to interpret the data we get, which is where the value of the discussion comes in. If we want analytics to drive our purchases, then it is up to us to create the data and understand its value and limitations.
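A shared database like this only works if every contributor reports the same fields. As a purely hypothetical sketch (the field names here are my own invention, not Hank's actual format), a minimal test record might look like:

```python
from dataclasses import dataclass, field

@dataclass
class BowTestRecord:
    # Illustrative fields only -- not an official reporting format.
    bow_model: str
    draw_length_in: float    # true draw length, inches
    draw_weight_lb: float    # measured (not marked) weight at that draw
    arrow_weight_gr: float   # total arrow weight, grains
    string_strands: int
    speeds_fps: list = field(default_factory=list)  # chronograph readings

    @property
    def gpp(self) -> float:
        """Grains of arrow weight per pound of measured draw weight."""
        return self.arrow_weight_gr / self.draw_weight_lb

rec = BowTestRecord("Example Bow", 28.0, 40.0, 360.0, 12,
                    [178.0, 177.5, 178.5])
print(round(rec.gpp, 1))  # 9.0
```

Recording measured draw weight (rather than marked weight) is what makes the gpp figure comparable across contributors.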
 

·
Banned
Joined
·
548 Posts
Bows shot from mechanically released Hooter Shooters show what a bow is capable of under ideal circumstances. They do not represent what you will see in the field. We need to learn how to interpret the data we get, which is where the value of the discussion comes in. If we want analytics to drive our purchases, then it is up to us to create the data and understand its value and limitations.
I'm not sure I agree with your assessments, Hank, for a number of reasons.

First, while a Hooter Shooter with a mechanical release does give a more perfect release, it also ensures that the bow is drawn to the exact same length every time, something that is absolutely necessary for an accurate test.

Second, when you say that the results from a Hooter Shooter are not representative of what you will see in the field, I agree, but not in the way you are implying. It all depends on the tester and the test parameters. If you're talking AMO, I would agree. That stuff isn't going to happen in the real world. If you use Blacky's tests as an example (see attached...the bow in this example is irrelevant as the parameters are the same for all), many people will likely get better, often MUCH better results than his testing shows. How many use 9 gpp and a 16 strand string? However, having and knowing a consistent baseline is absolutely key. If I'm shooting fingers, I'm smart enough to subtract 5 or 6 fps from the published results. If I'm using a 10 strand string with 7 gpp instead of a 16 strand string with 9 gpp, I know the difference between my experience and the published numbers, whatever that happens to be, is going to be very consistent for both of the bows I'm comparing. Whatever I add to or subtract from one, I simply add to or subtract from the other. Is it perfect? No. But it's going to be very close, and for the terms of comparison, good enough.

I enjoy reading individual reports as much as the next guy, but for the purposes of comparison, they don't really mean much.

Finally, when you say:

"We need to learn how to interpret the data we get, which is where the value of the discussion comes in."

I submit to you that this is precisely where we get into most of the fluff-ups around here. When a guy like MAT (for example) wants to have a discussion about the data, how it is arrived at, or what it means, he is labeled as a "complainer" with an "agenda."

It's the consistent testing parameters, the "baseline" if you will, that removes much of the speculation, the hyperbole, and the resulting arguments.
 

·
Registered
Joined
·
204 Posts
Long before I became involved in the production side of the archery industry I was a collector and tester of bows. Perhaps I'm not the only person who bought a Hooter Shooter to test my personal bows, but that's what I did. I went to visit Norb Mullaney to seek his advice and guidance on how he had tested bows for decades on behalf of Bowhunting Magazine. And then I started developing a reasonably extensive database for all the bows I tested. I just did this for my own edification and pleasure. Being able to objectively compare one bow to another enables a person to understand what makes a bow tick.

I am a firm believer in PRECISELY measuring each and every component that affects a bow's performance. How much energy does it store? How much of that stored energy is delivered to the arrow? How fast does the arrow travel? How accurately is the Force/Draw curve developed and measured? How accurately is the weight of the arrow measured? Is the bow drawn precisely to the same point every shot? How much does the string weigh? Are carbon arrows without fletching used? What rest material is used if shooting off of a shelf?

I am also a firm believer in testing bows in a consistent manner, like Blacky does or like Norb did. Otherwise comparisons are harder to make. My belief in the above notions may not fall in line with how others feel, but having tested many different bows of many different designs the above methodology has worked well so far for me.
 

·
Victim of Geography
Joined
·
1,160 Posts
The use of a shooting machine would make it impossible for most on here to submit data. How much of a spread would there be between a good release and an average one?

Is there not some sort of formula to work out what a 9 gpp arrow would shoot at if the tester is shooting an 8 gpp arrow?

It just seems Hank is doing us a big favour analyzing the data, and to dismiss it seems to be a bit of a waste.
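There is a common rule of thumb for the question above: treat the kinetic energy delivered to the arrow as roughly constant, so speed scales with the square root of the mass ratio. Real bows are slightly more efficient with heavier arrows, so this is only a rough estimate, not an exact conversion. A minimal sketch under that assumption:

```python
import math

def scaled_speed_fps(v1_fps: float, m1_gr: float, m2_gr: float) -> float:
    """Estimate the speed at arrow mass m2 from a measurement at mass m1.

    Assumes the bow delivers roughly constant kinetic energy to the
    arrow, so v2 = v1 * sqrt(m1 / m2). Efficiency actually rises a bit
    with heavier arrows, so this slightly underestimates v2 when m2 > m1.
    """
    return v1_fps * math.sqrt(m1_gr / m2_gr)

# For a 40 lb bow: 8 gpp = 320 gr, 9 gpp = 360 gr (illustrative numbers).
print(round(scaled_speed_fps(180.0, 320.0, 360.0), 1))  # 169.7
```

For comparisons between two bows the residual error largely cancels, which is the point made above about consistent offsets from a baseline.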
 

·
Civil but Disobedient
Joined
·
10,696 Posts
Discussion Starter · #7 ·
Sylvan,

I don't discount Hooter Shooter data at all. It represents the theoretical upper performance limit which is important to know. I will be using a shooting machine as soon as I am able to put one together. My point is that we can either wait for someone to solve our problem, or we can take action ourselves. Blacky cannot test everything that we want to see, nor can he test in all the configurations that are important to us. There are too many permutations. So we add to the knowledge base with our own data. It's that simple. It is all one body of knowledge that we can pull on and try to understand. It is not one or the other. You can always weight Blacky's data higher than data from other sources if you trust it more.
 

·
Civil but Disobedient
Joined
·
10,696 Posts
Discussion Starter · #8 ·
Long before I became involved in the production side of the archery industry I was a collector and tester of bows. Perhaps I'm not the only person who bought a Hooter Shooter to test my personal bows, but that's what I did. I went to visit Norb Mullaney to seek his advice and guidance on how he had tested bows for decades on behalf of Bowhunting Magazine. And then I started developing a reasonably extensive database for all the bows I tested. I just did this for my own edification and pleasure. Being able to objectively compare one bow to another enables a person to understand what makes a bow tick.

I am a firm believer in PRECISELY measuring each and every component that affects a bow's performance. How much energy does it store? How much of that stored energy is delivered to the arrow? How fast does the arrow travel? How accurately is the Force/Draw curve developed and measured? How accurately is the weight of the arrow measured? Is the bow drawn precisely to the same point every shot? Are carbon arrows without fletching used? What rest material is used if shooting off of a shelf?

I am also a firm believer in testing bows in a consistent manner, like Blacky does or like Norb did. Otherwise comparisons are harder to make. My belief in the above notions may not fall in line with how others feel, but having tested many different bows of many different designs the above methodology has worked well so far for me.
With arrow speeds, comparisons get more difficult and accurately controlling parameters is more critical. DFC's, stored energy, smoothness are different and can be done quite well with simple equipment available to most archers. A lot can be learned from comparing the shapes of these curves, as opposed to the absolute values. You have to look at all the data collectively and understand the tolerances. The problem comes when we try to make more from the data than is there. There are some questions you cannot answer with simple equipment. We can at least focus on the stuff we can answer.
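The stored energy mentioned above is just the area under the force-draw curve, which anyone with a draw board and a scale can compute from a handful of points. A minimal sketch using the trapezoid rule (the sample points below are invented for illustration):

```python
def stored_energy_ftlb(draw_in: list, force_lb: list) -> float:
    """Area under a force-draw curve via the trapezoid rule.

    draw_in:  draw positions in inches from brace, ascending
    force_lb: measured holding weight at each position
    Returns stored energy in foot-pounds (inch-pounds / 12).
    """
    area_inlb = sum(
        (force_lb[i] + force_lb[i + 1]) / 2.0 * (draw_in[i + 1] - draw_in[i])
        for i in range(len(draw_in) - 1)
    )
    return area_inlb / 12.0

# Illustrative readings taken every inch from brace (values invented):
draws = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
forces = [0, 5, 10, 14, 18, 22, 26, 30, 33, 36, 40]
print(round(stored_energy_ftlb(draws, forces), 1))  # 17.8
```

Plotting the same points for two limbs, normalized to their peak weights, is one way to compare the curve shapes rather than the absolute values, which is the comparison Hank emphasizes.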
 

·
Banned
Joined
·
548 Posts
Sylvan,

I don't discount Hooter Shooter data at all. It represents the theoretical upper performance limit which is important to know.
It really doesn't, that was my point. It only represents the "upper performance limit" if the parameters that you use are intended to illustrate the upper performance limit.

In the case of Blacky's testing, his testing parameters are actually pretty average, based on what people have said they actually shoot in terms of strings, arrow weight, and draw length.

The Hooter Shooter doesn't decide what parameters to use, it only makes sure that each shot is the same. Reality in, reality out. Fantasy in, fantasy out.
 

·
Banned
Joined
·
548 Posts
The use of a shooting machine would make it impossible for most on here to submit data. How much of a spread would there be between a good release and an average one?

Is there not some sort of formula to work out what a 9gpp arrow would shoot at if the tester is shooting an 8gpp arrow.
That's why it is imperative, for the purpose of comparison, that the baseline be void of all those shooter idiosyncrasies. If it isn't, the resulting data is useless.

It's not until you have the baseline that the differences in arrow weight, string choice, draw length, and release style can be factored in. If your release sucks, it is likely going to suck equally with each bow being compared. If you shoot a lighter arrow, a smaller string, or draw an inch longer than the baseline, it will be pretty consistent and can be added to or subtracted from the baseline.

John Havard said it better than I ever could, but in my opinion, if all you want to do is collect and compile data for the fun of it, the baseline doesn't matter. If you want to actually compare something, there has to be a constant to work from.
 

·
Registered
Joined
·
7,516 Posts
But that's just it, guys: you will be compiling a database of Hooter Shooter speeds. I truly don't care how a bow performs on a Hooter Shooter. For bowyers like John and Mike this info is critical and I'm sure a huge part of their limb development process. For me it tells me how fast a limb is under ideal conditions, and honestly that is a factor pretty low on my list. All high end limbs are going to be close in speed. What I care about is how a set of limbs react to my own personal idiosyncrasies - my release - my bow arm - how I tune a bow. No database will ever give me that info.

Matt
 

·
Registered
Joined
·
204 Posts
Not to beat a dead horse further, but another word about precise measurements.

I could give a bow to Rod Jenkins and have him shoot it, and the shots would consistently vary by perhaps 1 fps. That's a small difference. If I were to shoot the same bow the resulting arrow speed would vary a lot more than Rod's. My draw length might vary by 1/4" or 1/2" from shot to shot, or my release might not be as clean from one shot to the next. Rod (as an example) would not have nearly as many variations as I would. So it's not impossible for an individual with a good release to get useful data, but the individual needs to essentially be a shooting machine for the data to be useful for comparisons.

HOWEVER, with that being said, any individual who wants to test bows still needs to pay strict attention to all of their measurements. The draw weight of the bow must still be precisely measured at precisely 28" or 30" (or whatever) AMO. That means that a draw weight scale properly calibrated should be used - not some old spring-type scale that might be off by a couple of pounds. Arrow weight should be precisely 9gpp (or whatever) at the measured draw weight (not marked draw weight). And as long as the individual's draw length never varies by more than 1/4" from shot to shot then the numbers will be reasonably meaningful. If strict attention is paid to all of these measurements then it's possible for good results to be obtained.

Finally, a shooting machine is absolutely necessary for meaningful numbers. If the archer holding the bow is a shooting machine then fine. But if the archer holding the bow is human like me then the results will not be useful for comparisons.
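Two of the checks described above are easy to automate: computing the target arrow weight from the *measured* draw weight, and quantifying the shot-to-shot spread that separates a "shooting machine" archer from the rest of us. A small sketch, with invented chronograph readings:

```python
from statistics import mean, pstdev

def target_arrow_weight_gr(measured_draw_weight_lb: float,
                           gpp: float = 9.0) -> float:
    """Arrow weight in grains for a grains-per-pound target,
    computed against the measured (not marked) draw weight."""
    return measured_draw_weight_lb * gpp

def speed_spread(readings_fps: list) -> tuple:
    """Mean and population standard deviation of chronograph readings.
    A large spread suggests the draw length or release is varying."""
    return mean(readings_fps), pstdev(readings_fps)

print(target_arrow_weight_gr(41.5))  # 373.5 gr for a bow pulling 41.5 lb
m, s = speed_spread([181.0, 180.0, 182.0, 180.5, 181.5])
print(round(m, 1), round(s, 2))  # 181.0 0.71
```

A standard deviation under roughly 1 fps is the kind of consistency attributed to Rod in the post above; a much larger one means the archer, not the bow, dominates the numbers.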
 

·
Registered
Joined
·
1,505 Posts
I have really never bought a bow or firearm based on test specs.

They may be a guide to point me in the right direction, but, when it comes down to it, I buy bows and firearms the same way as I buy shoes.

I try them on.. LOL
 

·
Premium Member
Joined
·
12,452 Posts
On TT, AT, or the English forum you would be pretty foolish to report bogus data or data that is not reproducible by usual test means. There are a number of very keen individuals who test archery equipment and don't have a dog in the fight.

It certainly is not difficult or expensive to set up your own testing lab.

I don't think a data set can be the ultimate criterion for the selection of an archery kit. That is, a data set from lab conditions. On the other hand, a data set produced by you shooting your bow is only useful to you and really means nothing to me.

Considering this, I do test my kits. I don't publish the results because they are useless to you. Some of my favorite choices are nowhere near the "top" numbers. If I had purchased by data-set criteria I would not be a happy archer.
 

·
Premium Member
Joined
·
2,497 Posts
Hank's point about the shape of the DFC is very important. Questions like which limb is suited for my riser length and draw length, how hard do I want to push the limbs, do I want smoothness or a "back wall", and will I be over-stressing my limbs if I use a 13" riser are all easily answered by comparing the DFCs of limbs of different lengths and geometry, from the same maker or between different makers.

In my opinion, information gained from simply looking at the shape of the DFC (and smoothness) is actually more important than the absolute precision of magnitude. Graphical information combined with reviews from expert shooters gives us a very effective basis for understanding how a bow is likely to feel and perform for an individual archer.

As an information freak I would love to have ATA-like performance measures provided by independent testing labs, but I am certainly not going to hold off buying a bow until such a standard is universally followed. Even if perfect speed data were available I would still own a chronograph and draw board, and measure draw length, draw weight, arrow weight, and FOC. I would still learn to tune a bow capable of shooting better groups than I can consistently achieve. IMHO, Hank's DFC comparison database is incredibly useful to archers, helping us understand bow fit and feel and in making decisions; abstract speed performance, less so.
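Of the measurements listed, FOC is the one most often worked out by hand. The conventional formula expresses how far the balance point sits ahead of the arrow's midpoint as a percentage of its length. A minimal sketch (example numbers invented):

```python
def foc_percent(arrow_length_in: float, balance_from_nock_in: float) -> float:
    """Front-of-center: distance of the balance point ahead of the
    arrow's midpoint, as a percentage of total arrow length."""
    return 100.0 * (balance_from_nock_in - arrow_length_in / 2.0) / arrow_length_in

# A 29" arrow balancing 17.5" from the nock end:
print(round(foc_percent(29.0, 17.5), 1))  # 10.3
```

Note that published FOC figures depend on exactly where the arrow length is measured from and to, so it's worth stating that convention alongside any reported value.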

Rasyad
 

·
Civil but Disobedient
Joined
·
10,696 Posts
Discussion Starter · #18 · (Edited)
It really doesn't, that was my point. It only represents the "upper performance limit" if the parameters that you use are intended to illustrate the upper performance limit.

In the case of Blacky's testing, his testing parameters are actually pretty average, based on what people have said they actually shoot in terms of strings, arrow weight, and draw length.

The Hooter Shooter doesn't decide what parameters to use, it only makes sure that each shot is the same. Reality in, reality out. Fantasy in, fantasy out.
I think we are saying the same thing. Full factorial testing is very difficult and time-consuming, which restricts what anyone is able to do. That is what I would really want to do if I had the time and resources.
 

·
Civil but Disobedient
Joined
·
10,696 Posts
Discussion Starter · #19 ·
That's why it is imperative, for the purpose of comparison, that the baseline be void of all those shooter idiosyncrasies. If it isn't, the resulting data is useless.

It's not until you have the baseline that the differences in arrow weight, string choice, draw length, and release style can be factored in. If your release sucks, it is likely going to suck equally with each bow being compared. If you shoot a lighter arrow, a smaller string, or draw an inch longer than the baseline, it will be pretty consistent and can be added to or subtracted from the baseline.

John Havard said it better than I ever could, but in my opinion, if all you want to do is collect and compile data for the fun of it, the baseline doesn't matter. If you want to actually compare something, there has to be a constant to work from.
And this is the rub. It is not the data that is a problem, it is the interpretation, and sometimes, the desire to use data in ways where it is not appropriate. Speed is an interesting problem and not something that my data addresses in a rigorous way (nor do I claim to cover it rigorously). You can consider the bow the system, or the bow and archer. You could choose a single configuration (baseline) and test it very accurately and precisely. You could also give the bows to 100 archers, collect statistical data, and do the appropriate correlations to key factors such as experience, draw length, shooting style, etc. Each addresses different questions. We need to make sure that we are addressing questions with the right data.

One thing that kills progress is not being able to move forward without perfect data. I have seen that in business, and in my work as a scientist. You have to be able to work through the grey area rather than expecting that everything can be put in black and white terms. That is where it is important that we look at as much data as we possibly can and allow our own understanding to grow in the process. We have to build this picture, progressively, just like it would be done in the scientific community.
 

·
Banned
Joined
·
548 Posts
And this is the rub. It is not the data that is a problem, it is the interpretation, and sometimes, the desire to use data in ways where it is not appropriate. Speed is an interesting problem and not something that my data addresses in a rigorous way (nor do I claim to cover it rigorously).
I couldn't agree more Hank. Which is why (in my opinion) you can/should only measure the things that can actually be measured, and leave the rest up to the individual archer to extrapolate from those baseline measurements.

We can discuss, till we are blue in the face about things like smoothness, and how a DFC should illustrate how something feels, but not everyone feels things in the same way.

I'm NOT a scientist, and I'm sure 99% of the archers out there aren't either. However, common sense tells me that if a given bow is 5% faster than another, all other things being equal, it's also going to be the one that is more efficient, and has the most stored energy. Can it really be less efficient AND faster? Can it have less stored energy AND be faster? The key is, of course, all other things being equal.
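The "all other things being equal" logic above can be checked against the basic definitions: kinetic energy is mv²/2, and efficiency is energy delivered to the arrow divided by energy stored in the bow. With equal arrows and equal stored energy, more speed necessarily means more delivered energy and thus higher efficiency. A sketch in common archery units (grains and fps), using the conventional archery constant for the unit conversion, with illustrative numbers:

```python
def kinetic_energy_ftlb(arrow_weight_gr: float, speed_fps: float) -> float:
    """Arrow kinetic energy in foot-pounds.
    450240 is the conventional archery constant converting grains
    and fps into ft-lb (it folds in 1/2, grains-per-pound, and g)."""
    return arrow_weight_gr * speed_fps ** 2 / 450240.0

def efficiency(arrow_weight_gr: float, speed_fps: float,
               stored_ftlb: float) -> float:
    """Fraction of the bow's stored energy delivered to the arrow."""
    return kinetic_energy_ftlb(arrow_weight_gr, speed_fps) / stored_ftlb

# Invented example: a 360 gr arrow at 180 fps from a bow storing 40 ft-lb.
ke = kinetic_energy_ftlb(360.0, 180.0)
print(round(ke, 1), round(efficiency(360.0, 180.0, 40.0), 2))  # 25.9 0.65
```

With the arrow weight and stored energy held fixed, any increase in measured speed raises both numbers together, which is the common-sense point being made.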

So, at the end of the day, other than speed, what else is there to measure? All the rest has to be left up to the individual archer and cannot be graphed, plotted, or programmed. At least not in terms of where it will actually mean anything to anyone other than the one touting it.

The way I see it, and again, this is only my opinion, you can only truly measure so much; the rest is marketing. Even things like torsional stability get a little wonky. Yes, "it" can be measured, but what does more of "it" mean? How do I benefit by having more of "it"? I can measure my wife's hair. I can factually say that of all my friends' wives, mine has the longest hair by far. But so what? What does it mean? Does it make her a better wife? Mother? Cook? How would I even measure those things?

Some people like to collect and shuffle data for the sole purpose of collecting and shuffling data. They think it's fun. That's cool. My brother is like that. He tracks everything from the mileage on every tank of gas he buys, to batting averages, to how many times he cuts his lawn in a given year. Apparently he finds it useful for something, I just think he's nuts.
 