chimpzy said:
cleverlymadeup said:
chimpzy said:
Wikipedia on failure rates. [http://en.wikipedia.org/wiki/Failure_rate#See_also] Check what is on the bottom of the 'See also' list. I think it's hilarious.
also, editing the article on Wikipedia just before you make the post is stupid, and you're an idiot
http://en.wikipedia.org/w/index.php?title=Failure_rate&action=history
yeah, there's a history function where you can see what was changed. until today, just before you made the post, it said Xbox 360
so you fail at trying to be smart, fail pretty badly
anyways, it has always puzzled me why people think it's a great system when they've had to replace it several times. sure, it gets replaced, but that's not the point; they shouldn't have to replace it in the first place
You presume a lot when you have absolutely no proof I made those changes. Anybody can make changes to Wikipedia, and that includes you. See what I did there? Not nice, is it? Besides, why would I change Xbox 360 to PS3 when I have owned a PS3 since launch and like it because it doesn't break down on me, while I abandoned the other for that very reason?
Anyways, don't call people an idiot without having anything to back it up.
ok, let me do it this way: it was changed at 19:52 GMT and you posted at 19:57 GMT. considering it would have taken you several minutes to type out your post, and that the IP used to change it belongs to a european group, it's very easy to point the finger directly at you
also, you were the one who pointed out it was there in the first place. so, by some very basic deductive reasoning, you were the one who did it.
once again you fail at this game
ratix2 said:
unfortunately, from what i gathered, your experience comes only from your tenure in qa testing, which IS different from out in the real world.
wrong, it's exactly where you can get the experience to COMMENT on stuff like failure rates
quick question: did you guys pull every product from the line to test, or just a few samples from each batch?
yes, several times we've had to pull 100% of a run for a bad product, and those runs were well over 1000 parts
most companies DON'T pull every product and instead test some samples from each batch. if the failure rate for those samples is above a certain percent, then the line is stopped and the entire batch is tested. the problem here is that testing random samples is like polling a random sample of 1000 people from a population of a few million; you don't get the kind of results that you'd get from the entire population or batch.
yes they do, there are TONS of recalls
this is where your logic breaks down. the way testing works is that every X parts you test one. this is a proven manufacturing standard, part of ISO 9001 and several others.
it is nothing like randomly polling one person out of 1000. you do get very good results on the batch and how well things are going
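for what it's worth, the odds that a random sample catches at least one defective unit follow the hypergeometric distribution, and they climb very fast with sample size. a minimal Python sketch, using purely illustrative batch numbers (not from any real QA plan):

```python
from math import comb

def p_detect(batch, defective, sample):
    """Probability a random sample contains at least one defective unit
    (hypergeometric: 1 minus the chance the whole sample is good)."""
    if sample > batch - defective:
        return 1.0  # sample is bigger than the good pile: must catch one
    return 1 - comb(batch - defective, sample) / comb(batch, sample)

# Illustrative: a 300-part batch with 45 bad parts, 100 sampled --
# the sample is all but guaranteed to contain defective units.
print(p_detect(300, 45, 100))  # effectively 1.0

# Even a single bad part in 300 gets caught a third of the time.
print(p_detect(300, 1, 100))   # 1/3
```

the design point: detection probability depends mostly on the sample size and defect rate, not on how big the total population is, which is why a fixed every-X-parts plan works.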
for example, with processors, even microscopic dust particles can cause massive damage to a chip. let's say that 15% of the chips on a wafer of 300 have enough dust particles to cause major damage, and say qa has a month to test a minimum of 30,000 chips before they ship out. from that batch of 300 chips, 100 are taken out for qa testing. if the sample were an accurate representation of the chips that would fail under qa testing, then 15 of those chips would fail; however, only 45 of those 300 chips would fail, yet there is less than a 10% chance that even one of those chips would be pulled out for testing. if only 1 of the affected chips comes out bad, then the sample is insufficient to determine the final failure rate of the entire batch.
my point is this: qa testing in these cases cannot effectively determine the eventual failure rate of the entire batch.
actually it DOES prove how well it works; that's the whole point of qa testing. qa testing is one of the main reasons the Japanese became such a great manufacturing power. they blew the American car industry out of the water simply because they did qa and Statistical Process Control while the Americans didn't
second, different companies and different products have different failure rates, as well as different acceptable failure rates. you mentioned the blackberry, but for another example let's talk about the iphone, the itouch and the different generations of the ipod. all of those were notorious for having high failure rates, yet all of those failures also showed up only after some time in use (though usually less than one year). their failures ranged from overheating and exploding batteries to leaking battery acid and many other things.
my point here is that qa testing does not have the time to effectively test these products over a long period of use. hence the old saying: it's the consumers who actually test these devices for the companies.
actually, you once again don't understand how product testing works. when companies make a new product, they don't just assemble it and then send it out into the market.
when a company first makes a product, they make prototypes and use them for several months before releasing the product or even sending it to manufacturing
now let's talk about graphics cards. diamond is one of the few companies manufacturing graphics cards that can claim a failure rate lower than 1%; for most other companies that rate is much higher, as high as 7-8% for some, but mostly within the 3-5% range. yet these companies claim their failure rates are within acceptable limits. the reason is that diamond hand-tests EVERY card they make, but at the same time they produce far fewer cards than many of their competitors. those companies don't take the time to test every card and instead test only a few out of each batch. as i said earlier, while the sample may fall within acceptable failure limits during qa testing, many times it won't predict the eventual failure rate of the entire batch.
as for video cards, i'm going to blame the users rather than the cards themselves. too many people run a video card with bad cooling in the case, so it's not much of a stretch to figure the card is failing because of user error, not mechanical failure
finally, let's talk about the wii and the ps3 for a second. both have reported failure rates around 5%, yet both companies consider that within acceptable limits. so how is it that they do not face class action lawsuits for what you describe as abnormally high failure rates? it's because there is a difference between acceptable failure rates on the qa line and acceptable failure rates overall.
and how many of those are actually user-based errors? i'm willing to bet most were caused by bad things the person did rather than by actual system failure.
i'm not arguing with you that the 360 has/had an abnormally high failure rate; at 16-30% it is damn high. but you do need to understand something: the rrod IS caused by long-term overheating, not something that can be found in a few hours with a few test consoles out of each batch in qa testing. just as there is a difference between acceptable failure rates on the qa line and acceptable failure rates overall, there is also a difference between failures caused by issues that rear their ugly heads quickly and issues that pop up only after significantly more use than qa testing has time for.
but of course microsoft should have known it would be an issue. there is only so much heat a heatsink can dissipate, and had ms done effective testing before release they would have found that more heat was being generated than could be effectively dissipated, and they should have redesigned the heatsink/cooling solution. make no mistake, i'm not defending microsoft here; what i'm doing is debunking your assessment of failure rates, which is fallacious because you're only taking into account your personal experience with qa testing and not the business side of things, nor the acceptable failure rates of the other console makers or of other companies, which for most is eventually around 5%. this is one of the main reasons warranties exist.
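the burn-in-time problem can be put in rough numbers with a simple exponential failure model. the MTBF figure below is an illustrative assumption, not real console data:

```python
from math import exp

def p_fail_by(hours, mtbf):
    """Exponential failure model: chance a unit fails within `hours`,
    given a mean time between failures of `mtbf` hours."""
    return 1 - exp(-hours / mtbf)

# Assumed numbers for illustration: a heat-related defect with a
# 5,000-hour MTBF, checked in a 10-hour QA burn-in versus roughly
# 1,000 hours of home use over a unit's first year.
burn_in = p_fail_by(10, 5000)
first_year = p_fail_by(1000, 5000)
print(f"{burn_in:.1%} caught in burn-in, {first_year:.1%} fail in year one")
```

under those assumed numbers, a short burn-in catches a fraction of a percent of units while a sizable share still fails in the field, which is the gap between qa-line failure rates and overall failure rates described above.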
actually, they DID know about the failure rates; a few documents have surfaced over the years stating they knew the problem was there. they just wanted to be first out the door and get a lead on everyone; they didn't care that they would have a high failure rate
sure, they can claim what they want, but frankly basic testing would have easily exposed the RROD issue, considering certain tests would have made it very evident something was wrong. they just figured that releasing asap was the best way to get an early lead, and be damned with the customers' issues. i think they miscalculated how bad the issue was