cleverlymadeup said:
ratix2 said:
cleverlymadeup:
i realise that your example is just to demonstrate a point, however it doesn't do it very well. for electronics a 5% failure rate IS considered within normal limits; few electronics manufacturers can even claim a failure rate under 2%, let alone under 1%.
actually most companies are like that. it does help having worked in a factory and having friends who do qa testing. when we had anything more than 1% failing qa standards we stopped our machines
look at cell phones, such as the blackberry. since they've come out i can count on one hand how many i've run into that have broken because of a manufacturing defect, and i've worked in environments where every manager and above has a blackberry and there are several thousand employees in the organization
yet i've met several 360 owners who've had the RROD; in fact the majority of the owners i personally know have had it, and that's a much smaller sample of people than the blackberry owners or frankly any other cell phone's
your logic is rather bad and frankly just plain wrong, mine comes from experience
unfortunately, from what i gathered your experience comes from your tenure in qa testing only, which IS different from out in the real world.
quick question: did you guys pull every product from the line to test it, or just a few samples from each batch?
most companies DON'T pull every product and instead test just some samples from each batch. if the failure rate for those samples is above a certain percentage the line is stopped and the entire batch is tested. the problem here is that testing random samples is like polling a random sample of 1000 people from a population of a few million: you don't get the kind of results you'd get from testing the entire population or batch.
for example, with processors even microscopic dust particles can cause massive damage to a chip. let's say 15% of the chips on a 300-chip wafer, that's 45 chips, are contaminated badly enough to eventually fail. now say qa has a month to test a minimum of 30,000 chips before they ship out, so from that wafer only a handful, say 10 chips, get pulled for testing. if the sample were a perfect representation of the wafer, 1 or 2 of those 10 chips would fail. but work out the actual odds and there's roughly a 1-in-5 chance that NOT ONE of the 45 bad chips even lands in the sample, in which case the whole wafer passes qa clean despite a 15% defect rate. that's how probability works with small samples, and this does happen.
my point is this: qa testing in these cases cannot effectively determine the eventual failure rate of the entire batch.
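to put rough numbers on the sampling point, here's a quick sketch. the 300-chip wafer, the 45 defective chips and the 10-chip sample are purely illustrative assumptions, not anyone's real qa numbers:

```python
from math import comb

def p_sample_misses_all(pop, defective, sample):
    """Probability a random sample contains ZERO defective units
    (hypergeometric: the entire sample is drawn from the good units)."""
    if sample > pop - defective:
        return 0.0  # sample is bigger than the good pool, must hit a bad one
    return comb(pop - defective, sample) / comb(pop, sample)

# illustrative: 300-chip wafer, 45 defective (15%), qa pulls only 10 chips
print(p_sample_misses_all(300, 45, 10))  # ~0.19, i.e. ~1 in 5 wafers pass clean
```

so about one wafer in five with a 15% defect rate sails through that qa check without a single failure.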
second, different companies and different products have different failure rates as well as different acceptable failure rates. you mentioned the blackberry, but for another example let's talk about the iphone, the ipod touch and the different generations of the ipod. all of those were notorious for high failure rates, yet the failures showed up only after some time in use (though usually less than a year): overheating, exploding batteries, battery acid leaks and many other things.
my point here is that qa testing does not have the time to effectively test these products over a long period. hence the old saying: it's the consumers who actually test these devices for the companies.
now let's talk about graphics cards. diamond is one of the few graphics card manufacturers that can claim a failure rate under 1%; for most other companies that rate is much higher, as high as 7-8% for some, but mostly in the 3-5% range. yet these companies claim their failure rates are within acceptable limits. the reason is that diamond hand tests EVERY card they make, though they also produce far fewer cards than many of their competitors. those other companies don't take the time to test every card and instead only test a few out of each batch. as i said earlier, the sample may fall within acceptable limits for failures during qa testing and still not predict the eventual failure rate of the entire batch.
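the same sampling math shows how a batch with a too-high defect rate can still pass an acceptance check. the 50-card sample and the "at most 1 failure allowed" rule here are made-up assumptions for illustration, not any company's actual qa policy:

```python
from math import comb

def p_batch_passes(true_rate, sample_size, max_failures):
    """Probability a sampled batch passes qa: binomial chance of seeing
    at most `max_failures` defective units among `sample_size` draws."""
    return sum(
        comb(sample_size, k) * true_rate**k * (1 - true_rate)**(sample_size - k)
        for k in range(max_failures + 1)
    )

# made-up rule: batch passes if at most 1 of 50 sampled cards fails
print(p_batch_passes(0.05, 50, 1))  # ~0.28: a 5%-defective batch passes ~28% of the time
```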
finally, let's talk about the wii and the ps3 for a second. both have reported failure rates around 5%, yet both companies consider that within acceptable limits. so how is it that they don't have class action lawsuits against them for what you describe as abnormally high failure rates? it's because there is a difference between acceptable failure rates on the qa line and acceptable failure rates overall.
i'm not arguing with you on the fact that the 360 has/had an abnormally high failure rate; at 16-30% it is damn high. but you do need to understand something: the RROD IS caused by long-term overheating, not something that can be found in a few hours with a few test consoles out of each batch in qa testing. just as there is a difference between acceptable failure rates on the qa line and acceptable failure rates overall, there is also a difference between failures caused by issues that rear their ugly heads quickly versus issues that only pop up after significantly more use than qa testing has time for.
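a rough sketch of why a short burn-in can't catch long-term failures. the 20% first-year failure rate and 8-hour test window are illustrative assumptions, and the constant-hazard (exponential) model is actually generous to qa, since wear-out failures like solder fatigue are even rarer early on:

```python
from math import log, exp

def burn_in_fail_fraction(year_fail_rate, burn_in_hours, hours_per_year=8760):
    """Assuming a constant failure hazard (exponential model), return the
    probability a given unit fails during the burn-in window, calibrated so
    `year_fail_rate` of units fail within one year of continuous use."""
    hazard = -log(1 - year_fail_rate) / hours_per_year
    return 1 - exp(-hazard * burn_in_hours)

# illustrative: 20% of units fail within a year, qa burns each unit in for 8 hours
print(burn_in_fail_fraction(0.20, 8))  # ~0.0002: qa sees ~0.02% fail, ships the rest
```

in other words, even testing every single console for 8 hours would catch only a tiny sliver of the units destined to die within the year.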
but of course microsoft should have known it would be an issue. there is only so much heat a heatsink can dissipate, and had ms done effective testing before release they would have found that more heat was being generated than could be effectively dissipated, and they should have redesigned the heatsink/cooling solution of the system. make no mistake, i'm not defending microsoft here. what i'm doing is debunking your assessment of failure rates, which is fallacious: you're only taking into account your personal experience with qa testing and not the business side of things, nor the acceptable failure rates of the other console makers or what other companies consider acceptable, which for most is around 5% eventually. this is one of the main reasons why warranties exist.