
Thread: The black art of silicon

  1. #1
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default The black art of silicon

    I've been reading up a lot about silicon fab processes, yields, and technologies. Leading into all sorts of scholarly/engineering analysis of the Dark Silicon problem. There's tons of info out there, it's becoming an entire discipline in itself, but internet resources are always stonewalled by proprietary secrets and NSA barriers (silicon yield figures, and even the most trivial approaches which can increase them, are highly sensitive topics which apparently equate to corporate/industrial megaprofits and military/aerospace superiority).

    We all know that a complex part is designed. And a batch is produced. And not every particular piece in any given production run will meet 100% spec, some parts will have defective cores or defective cache or defective functions or electrical issues or can only operate at slower speeds or whatever. So the parts are binned, there is the "perfect" top-end part, there are a variety of "imperfect" lesser parts which have fewer cores, less cache, slower speed, etc. Some parts are outright defective, nonfunctional, or simply fail to perform at minimum thresholds. The proportion of perfect vs imperfect vs defective parts is referred to as the yield.

    The actual binning tier is somewhat arbitrary. For example, Intel's latest Haswell-E parts come in 5960X (3.0-3.5GHz/8-core/20MB/40 PCIe), 5930K (3.5-3.7GHz/6-core/15MB/40 PCIe), and 5820K (3.3-3.6GHz/6-core/15MB/28 PCIe*) flavours. So it's possible that a particular 5820K might, for example, happen to have some combination of up to 3.5/3.7GHz clocks, 6-8 working cores, 15-20MB cache, or 28-40 PCIe lanes*, just not enough to qualify for a proper 5930K or 5960X rating. Sometimes (very rarely at first, but more and more often as a new cutting-edge lithography process matures) a company like Intel will have yields so high that it competes with itself; it will then bin higher-rated parts as lower-rated parts so that it can still supply low- and mid-range pricing tiers. Thus we have happy overclockers who sometimes luck out and get a lesser part that performs as well as (or even outperforms) a superior part.

    (* I am aware that the 5820K is a deliberately feature-crippled part locked down to 28 PCIe lanes, basically included to round off Intel's initial Haswell-E offerings with a low-priced entry. But I'm ignoring that here; I only used the Haswell-E group as an example because other chip families include too many members to list without more wall of text.)


    I've observed similar patterns with earlier Intel chips, AMD's Tahiti chips, and NVIDIA's Kepler chips, and have not found any answer to the question I originally asked:

    Why does it seem like parts with fewer active cores (and/or less integrated cache/functionality) can operate at higher clock specs? They appear to have the same electrical parameters and same TDP spec. The same die with the same transistor count and density (although some entire "bad" blocks of transistors are deactivated). To me this means they have the same "power budget" - meaning that while they should work at higher speeds given the same power level, the better (slower, more complex, more functional) parts should be comparable when the "power budget" is increased. Yet, this doesn't seem to consistently be the case. Why can the simpler parts work faster, since they're basically just underspec complex parts?

    [Edit]
    Is the answer just based on the manufacturer's arbitrary product structure? In my above example, does Intel simply discard 5820K and 5930K parts which perform at higher-than-5960X speeds, so that the consumer faces a more "balanced" purchasing choice? Or perhaps yields of faster 5960X parts are too low, so they've lowered the threshold? Such draconian marketing, destroying some of your own good products just to control pricing tiers, has been observed from Intel before, especially where they dominate a niche. And, no doubt, Intel can repurpose semi-functional Haswell-E dies (or their components) in some other product line.
    My mind says Technic, but my body says Duplo.

  2. #2
    Will YOU be ready when the zombies rise? x88x's Avatar
    Join Date
    Oct 2008
    Location
    MD, USA
    Posts
    6,334

    Default Re: The black art of silicon

    Before I start any of the below, let me just state for the record: I am not involved in this industry, and have made no more than a casual study of it and the happenings therein. As such, most of what I am about to say, while often based on observations of industry actions over the years and what I know of the general issues at play and the history of the technologies in question, is either raw speculation or educated extrapolation.

    That being said, let's take off the tin foil hats for a second and look at this as the engineering problem that it is.

    Quote Originally Posted by Konrad
    Why does it seem like parts with fewer active cores (and/or less integrated cache/functionality) can operate at higher clock specs? They appear to have the same electrical parameters and same TDP spec. The same die with the same transistor count and density (although some entire "bad" blocks of transistors are deactivated). To me this means they have the same "power budget" - meaning that while they should work at higher speeds given the same power level, the better (slower, more complex, more functional) parts should be comparable when the "power budget" is increased. Yet, this doesn't seem to consistently be the case. Why can the simpler parts work faster, since they're basically just underspec complex parts?
    I think that this has less to do with power used by any particular part of the die, and more to do with the heat dissipation capabilities of the die/heatspreader as a whole.

    Let's take the 5960X (8x3.0GHz) vs 5930K (6x3.5GHz), for example (note: I am using the base frequency only, for reasons I will expand upon later). Now, we know that the CPU cores have a certain level of electrical efficiency. For the purposes of making the math simpler, we'll assume for the moment that this efficiency, E, is constant (yes, I know, it'll vary depending on operating temperature, operating frequency, die binning, proximity to other cores on the die, activity of nearby cores, etc). For a very simplistic calculation, let's say that the heat generated by the die is determined by the formula:

    Heat = (Core Count) * (Core Frequency) * Efficiency

    If we then assume that the efficiency value is the same for both bins, the difference in core frequency makes sense: the 5960X produces 8 x 3.0 = 24.0 core-GHz worth of heat, while the 5930K produces only 6 x 3.5 = 21.0, so the six-core part can run its remaining cores faster without exceeding the same heat budget.
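    To make that concrete, here's a toy Python sketch of the heat-budget math above (the efficiency constant and the idea of a fixed per-die heat budget are simplifying assumptions for illustration, not real Intel figures):

    ```python
    # Toy model of Heat = (Core Count) * (Core Frequency) * Efficiency.
    # E is an arbitrary constant assumed identical for both bins.

    E = 1.0  # assumed per-core efficiency constant (not a real figure)

    def heat(cores, freq_ghz, efficiency=E):
        """Heat produced by the die under the simplistic model above."""
        return cores * freq_ghz * efficiency

    # Treat the 5960X (8 cores at 3.0 GHz base) as defining the heat budget.
    budget = heat(8, 3.0)

    # The highest base clock a 6-core die could run within that same budget:
    max_6core_clock = budget / (6 * E)

    print(budget)           # 24.0
    print(max_6core_clock)  # 4.0, comfortably above the 5930K's 3.5 GHz base
    ```

    Under these assumptions, the six-core bin has headroom to spare, which is consistent with it shipping at a higher base clock than the eight-core part.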

    Supporting this line of reasoning is the operation of 'Turbo' mode. The way 'Turbo' mode works is that it dynamically overclocks certain cores when so instructed and able. The key point to notice here is that only certain cores on the chip are overclocked. It is also important to note that the CPU can only sustain 'Turbo' mode for a certain period of time. This is all designed not to save power, but rather to limit heat production: the die and factory-mounted heatspreader can only transfer a certain amount of heat energy, no matter how good the heatsink to which they are connected.

    Quote Originally Posted by Konrad
    Such draconian marketing, destroying some of your own good products just to control pricing tiers, has been observed with Intel before, especially when they dominate a niche. And, no doubt, Intel can repurpose semi-functional Haswell-E dies (or their components) in some other product line.
    It's not "draconian marketing", it's meeting market demand.

    Let's say you are Intel, and you have 3 different binned chips (call them A, B, and C). Let's say A has 6 cores, B has 4 cores, and C has 2 cores. Otherwise, they are identical.

    It costs you (Intel) the same amount to make all of these chips. They come off the same wafer, they end up with the same physical interconnects and heatspreader, and they go through largely the same QA testing. The only reason there are different bins is that, well, we honestly don't know how to make consistent dies. This is a very real science and engineering problem; it has been around since the earliest ICs were made back in the 1950s, and it has been caused by a variety of sources over the years. The earliest cause was (IIRC) crop dusting done by a local farmer. IIRC, the current issues are thought to largely arise from quantum-scale fluctuations in the materials used in the manufacturing process. Needless to say, this is a Hard Problem, and one that is likely to stick around for a while.

    Because of the vagaries of the manufacturing process, 10% of the chips meet the QA requirements for A, another 30% meet the requirements for B, and the remaining 60% only meet the requirements for C. So even though each chip cost you the same amount to manufacture, the available supply of each is vastly different. This is why the higher-binned chips cost more. It is not because the chip manufacturers are trying to screw us over, or because they are money-grubbing villains. It is the simple law of supply and demand.

    In order to make enough profit to meet your targets, you need to bring in $300,000 for each production run of 1,000 chips. Based on those income requirements and the bin yields, you price chip A at $1000, chip B at $334, and chip C at $167. This way, each bin brings in roughly $100,000 per run.
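    That pricing arithmetic checks out; here's a quick sketch (all numbers are the invented figures from the example above, not real data):

    ```python
    # Split a $300,000 revenue target evenly across three bins of a
    # 1,000-chip production run, given the example yields and prices.
    run_size = 1000
    yields = {"A": 0.10, "B": 0.30, "C": 0.60}  # fraction of the run per bin
    prices = {"A": 1000, "B": 334, "C": 167}    # USD per chip, from the post

    revenue = {b: run_size * frac * prices[b] for b, frac in yields.items()}

    for b in ("A", "B", "C"):
        print(b, round(revenue[b]))      # each bin lands near $100,000
    print(round(sum(revenue.values())))  # 300400, roughly the $300k target
    ```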

    Now, consider the market.

    There are going to be a fairly small number of customers who need chip A, but that is good because there aren't that many of them to begin with. These relatively few customers, however, are going to be able and willing to pay the price premium, so all is good. These are your professional and 'pro-sumer' customers.

    There will be a larger group of customers who need chip B, but they will still be in the minority. These are your enthusiast customers. They are a much larger group than the chip A buyers, but have less individually to spend on their CPUs, though they are willing to spend more than the average user. Fortunately, this all works out well, and they buy chip B.

    The majority of your customers are going to want chip C. It is the cheapest, and they do not need the power of either chip A or B. They will always buy the cheapest CPU available, and they are the main meat of your market.

    As long as the demand of each of these three groups roughly corresponds to the supply of the three CPUs, all is well with the world. Where things get messy is when demand out-strips supply.

    The really bad scenario would be if demand for chip A outstrips your supply of chip A. These are your highest-profile customers and your lowest-yield chip; a bad combination for a supply shortage. This is likely why the release cycle for Xeons is what it is (i.e., new architectures lag by ~6-12 months for E5s and 1-2 years for E7s); they likely start making the chips at the same time for all lines, but take the additional time to build up stock of the much lower-yield SKUs.

    A supply shortage for chip B is relatively unlikely to happen, as your supply statistically outweighs the demand (ie, % yield of B > % of market that wants B).

    But what if there is a supply shortage for chip C? Well, by definition, any chip binned as A or B met all of the QA requirements for chip C, so either could be substituted in. You really don't want to use chip A because it is such a low-yield chip, but if demand for chip B is sufficiently below its supply, it might make sense to start badging/flashing some units of chip B as chip C. As a result, you are able to meet the demand of this customer group without impacting any other customers.
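    That substitution logic can be sketched as a tiny allocation routine (the supply and demand figures are invented, and `allocate` is purely an illustrative helper, not how any real fab schedules output):

    ```python
    def allocate(supply, demand):
        """Cover a chip-C shortfall from bin B's surplus, sparing scarce bin A.

        Any chip binned as B passed all of C's QA requirements, so B units
        can be badged/flashed as C when C runs short.
        """
        shipped_as_c = min(supply["C"], demand["C"])
        shortfall = demand["C"] - shipped_as_c
        b_surplus = max(supply["B"] - demand["B"], 0)
        downflashed_b = min(b_surplus, shortfall)
        return shipped_as_c, downflashed_b

    supply = {"A": 100, "B": 300, "C": 600}   # per-run bin yields
    demand = {"A": 90, "B": 200, "C": 650}    # market demand per run

    own_c, flashed_b = allocate(supply, demand)
    print(own_c, flashed_b)  # 600 50: fifty B-bin dies get badged as C
    ```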

    The upshot of this is that, yes, sometimes you get higher-binned chips flashed and sold as lower-tier SKUs.


    To my knowledge, the above at least roughly describes the behavior of all large-scale manufacturers of complex ICs.

    One prime example of down-flashing to lower SKU groups was the Athlon II generation of AMD CPUs. It was a fairly open secret that the 3-core chips could often be unlocked and run as 4-core chips. Of course, success was not guaranteed, but if you were lucky enough to get one of the down-flashed chips, you were good. I believe this happened because AMD had introduced the 3-core chips as a way of getting a little more revenue from dies that fell between the QA thresholds for the 4-core and 2-core bins, and underestimated the segment of the market that would be interested in such a chip. Because demand outstripped supply, they were forced to cannibalize a lower-yield SKU to meet the demand for a higher-yield SKU.
    That we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours, and this we should do freely and generously.
    --Benjamin Franklin
    TBCS 5TB Club :: coilgun :: bench PSU :: mightyMite :: Zeus :: E15 Magna EV

  3. #3
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default Re: The black art of silicon

    Haha, I happen to have an Athlon II X3 440 (3.0GHz) running at a stable 3.8GHz overclock as a 24/7 fileserver; it's only crashed maybe three times in all the years I've run it. But repeated attempts to activate the "bad" core on this part have all failed, strongly suggesting that it is indeed a bad core.

    Likewise, I have a weird XFX DD Radeon HD 7870 (part FX-787A-CNAC "7870v GHz Tahiti Edition") which is actually "a cut down 7900 series [Tahiti] chip", "a special firmware flash version of the card that doesn't exist in the market", "a special part", "only 200 pieces worldwide", "world exclusive available to Aria only". I wonder if XFX had a bad production run, or if they just kept throwing bad cards into a pile until they had 200 pieces to sell to a vendor like Aria. (I also assume that these Tahiti chips, GDDR5 chips, etc, were all binned by AMD/TSMC and binned again by XFX prior to final PCB soldering - so how and why exactly would they go bad after such meticulous sorting? ICs derated from thermal stress?) My particular piece was originally made to be an XFX DD Radeon HD 7970 (part FX-797A-TDJC, identifiable through its odd card length, power connectors, and video outputs); the other Aria cards may or may not all be identical, it's just impossible to discern from the internet noise floor. I was able to install a modified 7950 VBIOS and basically (re)enable the firmware-locked capabilities, making this card a 7950 performing at faster-than-7970 clocks. Even though it was cheap, and even 3 years later, it's still a fantastic card (and it's CrossFire compatible with all other Tahiti cards, too!). I'm thinking of selling it, lol, because no AMD card is a Titan - but it's hard to let go of one-of-a-kind killer tech, lol.

    Your answer suggests another possible answer for my question:

    There are probably performance variations among the cores within the same die. But the entire part can only be rated at the speed of its slowest core. More cores means more chances for one of them to be a lemon; fewer cores means more chances to lock out the slowpokes.
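    A back-of-envelope probability model of that "slowest core" argument (the per-core probability is invented purely for illustration):

    ```python
    # If each core independently clocks high with probability p, a die only
    # qualifies for the fast bin when every enabled core makes the grade.
    p = 0.9  # assumed chance that any single core hits the high clock

    p_8core_fast = p ** 8  # all 8 cores must be fast
    p_6core_fast = p ** 6  # a 6-core bin needs only 6 good cores

    print(round(p_8core_fast, 3))  # 0.43
    print(round(p_6core_fast, 3))  # 0.531
    ```

    And the six-core bin's odds are actually better than p**6, since the manufacturer can disable the two slowest cores of the eight, keeping the best six rather than a fixed six.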

    I've learned that binning also selects for parameters we aren't often aware of: lower-leakage chips for same-package multi-die systems (like, for example, crazy top-end dual-GPU cards), lower-latency caches for high-bandwidth streaming, maximum-stability undervolts for radiation-hardened applications, etc. I suppose the list of special requirements could go on forever, covering any conceivable (if obscure) application of digital logic.

    Internet conspiracy theory suggests that Intel culls out the truly superior Haswell-E parts, either providing cherry-picked pieces to their mission-critical medical/military/aerospace clients or simply hoarding them for another high-performance enterprise-level processor rollout. While the sources of such information are always dubious, the notion does seem sound. I'm not attempting to depict Intel as a soulless, greedy megacorporation ... they gotta make their money, of course ... but I do find it plausible that niche consumer markets take second place to more lucrative enterprise contracts. End result: we get some nice toys, and enjoy the innovation, but we don't really have access to the very best toys. It seems believable enough.
    My mind says Technic, but my body says Duplo.

  4. #4
    Yuk it up Monkey Boy! Airbozo's Avatar
    Join Date
    Jun 2006
    Location
    In the Redwoods
    Posts
    5,272

    Default Re: The black art of silicon

    With regards to Intel hand picking the best binned silicon,

    I am not sure why this is an internet conspiracy theory. Intel has stated many times over the last several decades that they do cherry pick the best parts for certain customers, internal and external. Some probably end up in government projects, but most would end up in labs or test centers to validate the maximum operational parameters so that their massive research projects get some feedback and verification. That part of the story is most likely hush hush to protect the projects and their goals.

    Take the release last year of the unlocked Pentium processor (or should I say re-release). Intel did this for a couple of reasons, one of which was to see what the overclocking community could do with the processor. They always pay attention to what the fringe community is doing with their gear, and this helps their internal research as well.

    I was given some "special binned" versions of the Intel enterprise-class DC S3500 240GB SSDs for testing, with the caveat that I could not disclose the test results. Those special-binned parts became the performance-series 730 consumer-grade SSDs. For some reason they found that 240GB was the sweet spot (for reasons I still do not know), and the performance you get with two 240GB 730s in striped mode is better than two 480GB 730s in striped mode. Even the smaller ones were not as fast (have not tried them myself).

    It's all in the magic of the Silicon.

    Way back when I worked at SGI, and NVIDIA was accused of stealing SGI's IP, part of the settlement dictated that SGI got the top 10% binned GPUs off the line for use in their workstations and research projects. The cards we got were easily 10-20% faster than the same card available on the regular market.
    Last edited by Airbozo; 01-19-2015 at 08:55 PM. Reason: Changed 120GB to 240 and 240 to 480 (got my info wrong)
    "...Dumb all over, A little ugly on the side... "...Frank Zappa...

  5. #5
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default Re: The black art of silicon

    Well, I've taken the dive and ordered myself a 5960X. I'm hoping that the Haswell-E samples sent to the media for reviews (which typically overclock to around 4.3-4.5GHz without excessive voltages/temps/cooling) were not cherry-picked specimens, leaving all the slower pieces for sale to end-of-chain prosumers like me.
    My mind says Technic, but my body says Duplo.

  6. #6
    Will YOU be ready when the zombies rise? x88x's Avatar
    Join Date
    Oct 2008
    Location
    MD, USA
    Posts
    6,334

    Default Re: The black art of silicon

    As Airbozo pointed out, the cherry-picking results in a different model. Actually, this is what the 5960X is: a cherry-picked tier of the Haswell-E i7 die. So, if the samples sent to the media were retail 5960Xs, the chip you get will have met the same criteria as theirs did. Now, if the reviewers were provided with un-binned engineering samples, then your guess is as good as mine.

    Since last commenting on this thread, I ran across a good example of Intel special-binning CPUs for certain customers. The AWS EC2 C4 instances run on a platform built around the Xeon E5-2666 v3[1], a CPU that Intel provides only to Amazon. Now, for all of Amazon's talk of "custom processors [...] optimized specifically for Amazon EC2", all the evidence I can find points to these CPUs simply being higher-binned E5-2660 v3[2] chips*. This is a great example of Intel finding a buyer who will (presumably) pay more for CPUs that meet a higher QA level than Intel normally has a market for. It lets them get more profit out of the small percentage of chips that actually meet that higher QA level, rather than simply binning them as 2660s.

    [1] http://docs.aws.amazon.com/AWSEC2/la...instances.html
    [2] http://ark.intel.com/products/81706/...Cache-2_60-GHz

    *Disclaimer: This conclusion is based on what little publicly available information about the E5-2666 v3 I have been able to find, correlated with specs of other CPU models and a healthy dose of skepticism.
    That we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours, and this we should do freely and generously.
    --Benjamin Franklin
    TBCS 5TB Club :: coilgun :: bench PSU :: mightyMite :: Zeus :: E15 Magna EV

  7. #7
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default Re: The black art of silicon

    Mixed feelings because my shiny new 5960X has just arrived ... from Malaysia.

    There is much internet babble in the OC communities about chips from Costa Rica consistently performing better than those from Malaysia. Many (not all) of the most impressive overclocks documented online claim to have been achieved on Costa Rica parts. There even seems to be some ignorant racism at work, condemning the work ethic and capabilities of supposedly cheap and lazy Malaysian workers.

    I note that these markings do not originate at the fab plant but at a testing/assembly/packaging plant. Each fab facility maintains deliberate (and expensive) efforts to consistently produce universally interchangeable, essentially indistinguishable parts which fit the same spec. And yields are basically equal across the board, in terms of statistically random variance of individual samples within overall product batches. I also like to believe that the engineers and technicians (and tools and facilities) in these extremely lucrative and extremely high-tech plants are - hopefully! - equally educated, capable, and qualified regardless of which part of the world they happen to come from, work, and live in.

    Intel shut down their Costa Rica assembly/test plant 10 months ago anyhow, after suffering a few major fires and various other production-stopping setbacks. And USA-marked 59xx Haswell-E parts have never been seen in the consumer channel. So it seems they will all be marked in Malaysia anyhow.

    Wish me luck on my OC efforts! I'm hoping Malaysia gave me 8 uber cores running at >4.5GHz and <1.3V (with low temps on air cooling, haha). I will report results once I finish making my custom reinforced heatsink-support backplate thing, lol; my Raijintek NEMESIS TISIS cooler (rated for 350W TDP) is one hugely massive bloody monster I don't wanna dangle unsupported off my new mobo.

    [Edit: Yes, I have fallen into the "trap" of massive air coolers vs All-In-One/Closed-Loop Coolers and such. This is basically the best cooling available without doing a proper custom loop/rad setup (which hasn't been taken off the board, yet). And I think it's more efficient, with better idle/load temps, similar overall noise, and a little less cost (even after upgrading the fans with Noctua Industrials). I like the lower long-term maintenance requirements of air cooling. And I like that, in an absolute worst-case catastrophic failure, my processor will still have >1kg of metal heatsink sitting on top instead of swimming in a steamy puddle of heated liquids. Besides, I very much prefer the look of professionally-machined, black-nickel-plated quality metal towers over the look of a piece of cheap plastic decorated with a stupid LED and a childish picture of a pirate ship. Just no accounting for taste, eh?]
    My mind says Technic, but my body says Duplo.

  8. #8
    Fresh Paint
    Join Date
    May 2015
    Posts
    2

    Default Re: The black art of silicon

    IS PLANNED OBSOLESCENCE REAL? - BY TIO AND COLIN CULBRETH

    PART 1

    Due largely to the economics education administered in schools, many people have become familiar with the concept of the Law of Supply and Demand. This concept is heavily discussed in schools across the world and is often depicted as the lifeblood of the economy in a capitalistic monetary system. It is said to be the backbone which holds up its very structure. We are also taught that the health of the economy is dependent on consumer spending and that without it, the economy would stagnate.

    Without consumer spending, the whole money system would fail; this would have disastrous effects on many areas of social life that make our lives convenient and comfortable. However, this is only partially true. Forget what you were told about economics in school. ‘The Law of Supply and Demand’ does not exist - it’s a myth.

    Allow me to explain.

    When people buy products, the respective businesses and corporations experience an increase in sales and revenue. This signals to them that their products are in demand. The revenue generated from sales enables these businesses to pay their employees, who then spend that money on more goods and services provided to them by other businesses and corporations, and these establishments begin to create more goods and services in anticipation that customers will continue to buy (supply).

    It’s a cyclical process. However, in truth, demand is an illusion. Think about it…factories do not produce goods in order to keep up with a real-time demand.

    When a person wants a car, auto manufacturers and dealerships do not assemble one for the customer made-to-order. The cars are already made, whether there is demand for them or not. This holds true for every product on the market. Toy factories do not wait to assemble children's toys until a parent requests one, any more than cell phone companies wait for a customer to request a new cell phone. Therefore, what is referred to as "demand" is actually a model, based primarily on past sales trends, that predicts future sales without anticipating any unexpected socioeconomic changes that could disrupt business. If people suddenly stopped buying a particular item, thousands upon thousands of products could find their way to a landfill, because the machines in most modern facilities are automated and designed to produce continuously, not to track actual demand. If "demand" were accurate, there would be no reason to discard perfectly good merchandise; everything would be accounted for (unless the product was defective).

    The idea of working to eliminate waste is not really considered by corporations. The money to be made from selling products is the goal, while waste is minimized only out of fear of wasteful spending (reduction of profit). Factors like the environmental impact of waste handling, product necessity, or improvements and upgrades are not considered when evaluating the consequences of waste. This is because corporations want to ensure constant revenue.

    This means corporations have to find ways of persuading consumers to continue buying their products, which also increases the demand for their products. This demand is artificially created in several ways: through planned obsolescence (products with short lifespans), by trying to manipulate public opinion through advertising (people who buy things are happy, beautiful, and fulfilled), by pulling at emotional strings or selling fear (fashion and beauty products), and by anyone else pushing hard to make a buck (Reebok’s Easy-Tone Shoe Line). That is putting it as simply as I can.

    One of the most common ways of ensuring that customers keep spending money is through planned obsolescence. For those who are not familiar with this notion, planned obsolescence is the idea (or method) of purposely designing a product to wear down, break, lack compatibility with other devices, and/or become obsolete or out-of-fashion after a relatively short period of time. In other words, if I make a smartphone, I will design it so it breaks down after—let’s say—one year or so, which means that you’ll need to buy a new one from me every year. This way, I can keep my business flowing by making and selling more smartphones. I could also make the smartphone non-upgradeable, so that when a new camera comes out, you cannot replace your smartphone’s camera with the new one. You will have to buy an entire new smartphone instead.

    Another type of planned obsolescence is to make a product seem out of fashion. For instance, I can advertise my new phone to look superior to the old one and make the old one look awful, even though there is little difference between their functionality.

    We are all familiar with seeing a new smartphone coming out every six months to a year, but is this all part of some shady plan to keep consumers buying products, or is it just a normal part of everyday business?

    I started to think about this when I realized how slow all my new computers had become after just one year of use. It did not matter which operating system was installed, they just became slower and slower in a very short period of time.

    One time, I attempted to disassemble my HP laptop to clean it of the dust that accumulated inside and often caused overheating. I started the job, and soon I realized I needed two types of screwdrivers.

    A minute later, that had increased to five different types of screwdrivers, because the screws were all different types. I ended up using no fewer than eight different screwdrivers to access the cooler. Luckily for me, my father has every screwdriver in the world, even for screws that haven't been invented yet. But I was shocked, because there is no reason I can see to use all these different screws for the same purpose. I later asked my father (an engineer) whether the various types of screws serve different functions, and he, just as surprised as I was, said that they don't.

    This got me thinking about other products. I realized that old bicycles, such as the ones my father has had since the '70s, outperform the newer 21st-century bicycles that I had.

    Though my father's bicycles were far older than mine, they require less maintenance than my "newer" bikes, which needed repairs every few months after their first year of use.

    Another example is the printers I have had over the years. I never understood why it often costs more to buy new ink cartridges than to buy a new printer with ink cartridges included. Another issue: when one of the color cartridges ran low on ink (but was not entirely empty), it would prevent me from printing with any of the other colors…

    That is either a very stupid design, or a purposeful one. Additionally, some printer makers add a timer to their cartridges so they 'expire', even if there is still plenty of ink remaining.

    By the way, right now I am using an ASUS laptop (this model) in which I cannot access anything beyond the RAM and hard drive. It is sealed completely. So if I ever want to clean the dust out so it doesn't overheat, there is no way for me to do that, or anything else except exchanging RAM chips and hard drives. Likewise, on my sister's tablet, I had to resort to 'non-official' methods to manually upgrade its old, lagging Android operating system to make it compatible with the latest apps, because the company that made the tablet decided to make it non-upgradable.

    I first heard of the term “planned obsolescence” from Jacque Fresco in one of the Zeitgeist films, and I admit, it sounded a bit like a conspiracy theory to me. I mean, is it really possible that this world is so awful that it creates products that break down on purpose for mere business profit?

    Well, I did my own investigation and what I found was really surprising to me.

    In 1924, when the American national automobile market began reaching saturation, the head of General Motors suggested changing the design of their cars each year in order to convince people to buy a new one every year. This really sounds like a conspiracy theory, but it's not.

    The interesting part here is that this concept was borrowed from the bicycle industry, which was already using the same tactic.

    This idea turned out to be very profitable for GM, which surpassed every other automotive company on the market at the time. Small companies could not keep up with this aggressive tactic, so they went bankrupt.

    Henry Ford, a major name in car manufacturing at that time, did not agree with this practice. He wanted to design simple, cost-efficient cars. But guess what? General Motors surpassed Ford's sales in 1931 and became the dominant company in the industry thereafter.

    Around the same period, another group of people applied similar ideas of planned obsolescence: in 1924, the major light bulb manufacturers formed a cartel and agreed to design bulbs that would last only 1,000 hours. They had nothing to lose or fear, I suppose, since they had faced essentially no competition at all for the previous 20 years.

    They were accused of holding back technological developments that could increase the lifespan of the light bulb. What is amazing is that their association levied fines on members whose bulbs were found to surpass the 1,000-hour mark. So if you tried to make a better light bulb, you were made to pay a fine.

    The people behind this plan claimed that it was done to optimize the bulbs, and that a longer lifetime could be obtained only at the expense of efficiency, since progressively more heat and less light are produced as the lifespan is increased, wasting electricity. Of course, some argued to the contrary.

    OK, maybe that is true. Maybe they really did need to limit light bulb lifespans in order to optimize their efficiency. How should I know? The main point here is to show that it is possible to intentionally design things to last only a certain amount of time before they break down.

    In 1932, in response to the Great Depression, a type of crisis where the people's invented game (the money game) doesn't work as planned, some proposed a plan that would have the government impose a legal obsolescence on consumer goods in order to stimulate and perpetuate consumption.

    Brooks Stevens, "a major force in industrial design" (as The New York Times described him), later popularized this idea even further.

    By his definition, planned obsolescence is "instilling in the buyer the desire to own something a little newer, a little better, a little sooner than is necessary."

    As Wikipedia states: “A common method of deliberately limiting a product's useful life is to use inferior materials in critical areas.” For instance, screws can be made from a soft metal that wears down easily, batteries can be made impossible to replace, or a product can require custom-made batteries available only from a specific company (usually the original manufacturer).

    “Planned obsolescence is sometimes achieved by placing a heat-sensitive component adjacent to a component that is expected to get hot. A common example is LCD screens with heat-sensitive electrolytic capacitors placed next to power components that may warm up to 100 °C or hotter; this heat greatly reduces the lifespan of the electrolytic capacitor. Often, the goal of these designs is to make the cost of repairs comparable to the replacement cost, or to prevent any form of servicing of the product at all. In 2012, Toshiba was criticized for issuing cease-and-desist letters to the owner of a website that hosted its copyrighted repair manuals, to the detriment of the independent and home repair market.”
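    The capacitor-aging claim in that quote follows a standard industry rule of thumb (derived from the Arrhenius equation): an aluminum electrolytic capacitor's expected life roughly halves for every 10 °C of operating temperature above its datasheet rating point, and doubles for every 10 °C below it. A minimal sketch of that rule; the function name and the example numbers are illustrative, not from any specific datasheet:

```python
def cap_life_hours(rated_life_h, rated_temp_c, actual_temp_c):
    """Estimate electrolytic capacitor life using the '10-degree rule':
    life halves per 10 degrees C above the rated temperature, doubles per 10 below."""
    return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A hypothetical capacitor rated 2000 h @ 105 degrees C:
print(cap_life_hours(2000, 105, 65))   # -> 32000.0 h in a cool, well-laid-out design
print(cap_life_hours(2000, 105, 95))   # -> 4000.0 h sitting next to a hot power component
```

So the same part lasts nearly a decade of daily use in a cool layout, but only a year or two when parked beside a 95 °C heatsink, which is exactly the design choice the quote describes.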

  9. #9
    Fresh Paint
    Join Date
    May 2015
    Posts
    2

    Default Re: The black art of silicon

    PART 2

    DID YOU KNOW THAT INTEL WORKS ON THE PRODUCTION OF THE NEXT GENERATION OF PC CHIPS BEFORE IT HAS EVEN BEGUN TO MARKET THE LAST ONE IT CREATED?
    IT IS LIKE THEY SOMEHOW KNOW THAT PEOPLE WILL BUY THE NEXT ONE, TOO.

    If you want to change your iPhone's battery, you need a special screwdriver, because the battery is sealed inside the phone. More than that, the replacement will cost you around $79, just $20 short of the typical subsidized price of a new iPhone 5C.

    Another thing iPhone users complain about is that upgrading older iPhones to a newer operating system makes them slower, and it is extremely difficult to revert to the older, better-functioning software. In other words, if you want to upgrade, it is easier to just buy the new iPhone.

    Of course, I cannot tell for sure whether or not Apple is really engaging in “planned obsolescence”, but their product designs and actions arouse suspicion once you realize they could be made better.

    When a person purchases a computer, the buyer expects the investment to last for a few years; at least, this was the case over the last decade or so. But recently, computer companies have stepped up their sales targets. These days, computers are replaced at a far more rapid rate than in past years. We are told that this is because technology is advancing so rapidly. But again, that is only partially true.

    Many programmers purposely make computer software which can only be run on certain operating systems. Therefore, the consumer has no choice but to go out to the store and buy a brand-new computer, despite the fact that their old computer works just fine.

    Computers are expensive and the annoyance of planned obsolescence turns out to be quite a financial disaster when a person wishes to upgrade their computer, but ends up having to purchase a brand-new one.

    If you have any experience with computers at all, you're likely familiar with software updates. Recently, I had to purchase a new Apple MacBook Pro. I previously had the original MacBook, but when I tried to install new programs on my old computer, such as Dragon Dictate, I was hit with the realization that it was far too outdated to support them. Over the five years since I had purchased it, the Apple operating system had gone through the “Tiger”, “Leopard”, and “Snow Leopard” versions, and had recently introduced “Lion”. The programs I required were only available on the new Lion operating system. However, because the operating system on my old laptop was non-upgradable, I had no choice but to purchase a whole new computer, even though there was absolutely nothing wrong with my old MacBook to begin with; it still runs fine to this day.

    So the question is, why did Apple need to cycle through multiple operating systems? Why not simply continue to upgrade Snow Leopard and eliminate all the excess waste of outdated software, hardware, and even entire computers? The answer is again simple: Apple needs to constantly generate revenue to outdo its competitors. This is a very smart business plan for a corporation, but it is also extremely wasteful and unnecessarily complex, and something we could move past at our present level of technology.

    Of course, it is sometimes hard, if not impossible, to tell when a company deliberately does something like this for profit. I have shown some companies that are strongly suspected of practicing planned obsolescence but don't admit it; other companies may be more open about the topic.

    Whatever the case may be, there is no doubt that this strategy can be very profitable for business, and that failing to adopt it can even be harmful, as one Canadian company learned firsthand.

    The company built an armored vehicle for the Canadian army 14 years ago, and they did a pretty good job.

    So good, in fact, that when they unveiled a newer, improved model to the military, hoping to sell the new vehicles under a $2.1 billion contract, the Canadian army said, 'We don't need them. The old ones are quite good.'

    That company lost a $2.1 billion contract because its products were too well-made. So it's important to understand that, in today's monetary system, a business can go bankrupt for producing great products that need no maintenance or replacement for many years.

    The thing is, planned obsolescence can never be properly proven, because you cannot know a company's actual intent. They can adopt the strategy while never admitting it. How can you accuse Apple of using this tactic because of the special screws in their cases, when they can simply say, “Well, that's our design”?

    At first I thought it might be just an idea with no real basis in reality, but now there is no doubt in my mind that planned obsolescence is a real marketing strategy that some, perhaps many, companies adopt.

    When I look around, I see lots of cars, many of them new and for sale, and I wonder what is so new about the new ones. The same goes for computers, smartphones, and many other products. Everyone I know uses their smartphone for simple internet services like Facebook, email, and other basic functions, not for resource-hungry games or apps, yet many buy the latest models anyway. Why replace a perfectly good smartphone every time a new one is released?

    I used to have the coolest phones in town. When the new and cool Nokia N-Gage came out, I was the first to have it. Until 2007 or so, I was obsessed with mobile phones, until I realized that they are all basically the same. After that, I stopped upgrading my phone every time a new one came out. The reason I had bought so many was the social context: mostly advertising.

    Think about fashion. Clothes are basically all the same; no new garment has any new feature. They are purely aesthetic bags with legs and arms, yet people constantly change clothes for subjective, fashion-driven motives.

    If you produce laptops and change their power plugs every 5 years or so while keeping the same voltage and functionality, then your actions are inhumane, because someone with an older laptop may not be able to find a replacement power cable, rendering such old models completely unusable.

    What is even scarier is the huge waste of resources. Changing fashion and gadgets because they are not “cool” anymore, or purposely designed to fail, produces so much waste.

    So, all in all, it is true that we live in a world run for personal profit by primitive monkeys who use many psychological strategies to make you buy their products: “It's too ugly; you need a more beautiful one”, “It's not fashionable”, “It's not that good”, and so on. But all of this is the fault of the game the monkeys play in the concrete jungle: the money game, which rewards you for such actions and even punishes you when you don't follow along.

    Think about it. There are so many products in the world: cars in showrooms; smartphones, tablets, and PCs in stores; batteries; furniture; and so on. Companies have to find ways to sell these products, or else they will not make a profit, or will even go bankrupt. Of course they will all try to make you buy them, using various tactics.

    This monetary system could not work if products were suddenly made so well that people wouldn't buy new ones for years.

    That’s the sad truth.

  10. #10
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default Re: The black art of silicon

    AMD has done well with a planned-longevity approach, though. Each new CPU socket is designed for backward compatibility with older CPUs, and each new CPU is designed for backward compatibility with older sockets. As much as possible, at least - sometimes an entirely new platform needs to be designed to accommodate entirely new tech, and it's just time to let go of the legacy anyhow. Just saying that AMD is a surprisingly small company, yet this strategy lets them float in the same water as megacorporate Intel. And we all know how AMD and NVIDIA have both recycled and rebranded and rebinned countless revisions of the same tired old "disposable" GPUs - a special example which seems to flaunt planned obsolescence.
    My mind says Technic, but my body says Duplo.
