Tiny Modular Computers
----------------------

This is a preliminary draft of an article answering the question of why "tiny computers" are going to become increasingly important.

Modular computers have been available for some time, typically in SO-DIMM form-factor from companies such as Triton, Direct Insight and so on. These modules however are only just now using CPUs that could be considered useful for general-purpose computing, and they have not hit the mainstream because they are still not useful without a matching motherboard. There is also increased interest, in the form of the Q-Seven Standard, the EDM Standard and the ULP-COM Standard, in targeting the (very large) Digital Signage Industry, to take an example. However, again: these standards *require* a matching motherboard, and are not really suited to being sold to the average person, who is unaware of anti-static precautions and the need to keep fingers off the contacts.

Then, also, thanks to the low cost of small boards such as the Arduino and others, there is increased interest - especially in the education sector - in tiny "engineering-style" boards. At the more powerful end there is the Origen, the Beagle Board, the Panda Board and so on. Again, however, these require considerable handling care.

The paradigm shift really only occurs when the computer is tiny, mass-volume, affordable, good value for money *and* comes shipped in a robust case that is designed to protect the electronics, yet is also conformant to an Open Industry Standard that allows that computer to be plugged directly into a wide range of electronics appliances: portable devices, low-power servers, desktop computers - everything. This is where the revolution in computing begins.

[22Oct2012: the rest of this article is, at present, in note form, and will be edited and published shortly].

What are the benefits of tiny computers?
----------------------------------------

The benefits of tiny computers are their ease of adoption and the larger number of uses to which they can be put. 40nm geometries and below have reduced the cost, reduced the power consumption and increased the functionality of processors and memory, all at the same time. A classic well-known example is the Apollo Moon Rocket's computer being less powerful than a calculator from the 1990s: that trend has now reached a critical threshold where mass-produced tiny low-cost computers can do the same job that an entire Desktop Computer from five years ago could do. With geometries continuing to shrink, this trend is only going to accelerate.

But just having smaller computers isn't really the key, here. If a computer of size 3in by 2in can replace a Desktop computer of size 12in by 18in of five years ago, so what? Most people already have Desktop and Laptop computers that really do not need replacing. What's really exciting is when those tiny computers become a user-hot-swappable "module" (as opposed to a factory-installable module in e.g. SO-DIMM or MXM form-factor). Once the functionality of the "computer" is separated out behind a common standard, and the "computer" part can be upgraded as an off-the-shelf retail part with a faster, cheaper, lower-power and generally better replacement on an ongoing basis over the next decade or so, there is a major paradigm shift in the entire Industry.

This paradigm shift, however, simply could not happen until the power consumption dropped below 3 watts, the level of integration on the processors became as high as it now is, and the price dropped to consumer-affordable levels. So the markets for which these tiny computers are suitable are very large, and as the geometries decrease and the levels of integration increase, those markets are only going to grow.

Why are they good for these things?
-----------------------------------

Tiny computers typically comprise three major components in a highly compact space of between 3 and 6 square inches: SoC, RAM and NAND Flash. The SoCs (systems-on-a-chip) themselves, rather than relying on an additional "peripheral chip", actually have all the peripheral support *built in* to the chip itself. 2D and 3D Graphics, Ethernet, USB, SATA, HDMI, SD/MMC and much more - it's all on-board, all on one chip. Contrast this with the approach of having to run a massive, power-hungry, ultra-high-speed "peripheral" Bus (to a Northbridge and Southbridge IC and then, in some cases, on to a separate, very power-hungry Graphics Card). Instead, the CPU can be connected to the peripherals *inside* the chip, saving both the cost of an extra IC and enormous amounts of power. The implication, however, is that each SoC has to have a custom kernel, and the QiMod approach is to rationalise that behind a de-facto Industry Standard (EOMA), thus making the route to adoption much easier right across the entire Industry.

However, the present situation is a stark contrast to this vision, because SoCs grew up from a specialist "vertical market" mindset. SoCs have traditionally been designed to target very specific markets, and their designers did not and do not imagine that their SoCs could be used for any other purposes. They hire Software Engineers to target those markets; their entire P.R. and their immediate customers are limited and restricted to those markets. Anyone not on their customer list is typically denied access to that SoC, for fear of them being a drain on the company's extremely limited software engineering expertise without a guaranteed return on investment, especially in "unproven" markets.
Our approach - to place these tiny computers behind a Standard that is itself made up of lowest-common-denominator interfaces that have been well-established for over a decade - opens up the markets to SoC vendors, but does so *without* placing any burden of cost onto them. Our approach therefore effectively turns specialist vertical-market processors into general-purpose computing platforms, simply by adopting an open public standard. Any SoC vendor can automatically sell directly into a pre-existing, diverse and flourishing marketplace by doing nothing more complex than creating a module around their SoC that conforms to the standard.

It's also interesting to note that this shift only really started to be possible once the whole "tablet" market began to take off. The reason is that many SoC companies decided to cash in on tablets, not really realising or foreseeing that SoCs developed for that one specialist market can very easily be used to create general-purpose modular mass-volume appliances.

What aren't they good for? Why aren't they good for these things? What are their limitations as computing devices?
-----------------------------------------------

This requires a bit of background to answer properly. First it's important to note, from above, that SoCs are tightly integrated: graphics, peripherals, power management - all on one chip. That means that there is no BIOS - which is highly significant and troublesome for everyone - and the burden of responsibility for tying everything together is placed directly onto the kernel (typically Linux). Here we see signs of the problems and challenges.

The first challenge is that if a particular SoC does not have one particular interface that is needed for a specific task, then often you simply have to rule out the entire SoC. For example, some of the lower-cost ($7 and below) SoCs do not have HDMI, SATA or Ethernet.
If you look up the cost of adding Ethernet or SATA interfaces via USB, it's often $1 for a USB Hub IC and $1.50 for a USB-to-SATA converter IC, and there exist ICs which have 3 USB ports and also Ethernet: they're typically around $3. Many of the ultra-low-cost SoCs do not have dual LCD outputs, so even if you added an HDMI converter IC (typically $3 to $4, and that's excluding the $50,000 HDMI License) you would still not get dual outputs. Given that the total cost of these components, when added to the cost of the lower-cost SoC, is often far in excess of any other SoC on the market that has HDMI, SATA and Ethernet built in, it's an incredibly easy decision to simply eliminate that lower-cost SoC entirely from consideration, regardless of how good any of its other features might be.

The second challenge is that each SoC is radically different from any other SoC on the market, and there are hundreds to choose from. Each SoC requires MASSIVE customisation of the Linux kernel, which is overwhelming both the SoC houses themselves and the Linux kernel community. To make matters worse, the level of integration within the SoCs means that there is no incentive to standardise when it comes to creating actual devices. It comes as no surprise, therefore, to learn that the burden of responsibility for encoding the peripherals (power-up sequences of ICs, GPIOs, keyboard matrices etc.) falls yet again onto the Linux kernel. So not only is the SoC "core" support overwhelming the Linux kernel developers, but the SoC vendors themselves feel obligated to take this entire situation onto their own shoulders on behalf of their customers. A typical SoC vendor will therefore have only a very limited number of customers, helping them every step of the way to create the entire appliance.
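The component-cost arithmetic above can be sketched as a quick bill-of-materials comparison. The figures for the low-cost route are the ones quoted in the text; the $11 price for an integrated SoC is a hypothetical placeholder, chosen only to make the comparison concrete:

```python
# Illustrative BOM comparison using the figures quoted in the text.
# The $11.00 integrated-SoC price is a hypothetical assumption, not a quote.

low_cost_route = {
    "SoC without HDMI/SATA/Ethernet": 7.00,
    "USB hub IC": 1.00,
    "USB-to-SATA converter IC": 1.50,
    "USB hub with Ethernet IC": 3.00,
    "HDMI converter IC": 3.50,  # excludes the $50,000 HDMI licence fee
}

integrated_route = {
    "SoC with HDMI, SATA and Ethernet built in": 11.00,  # hypothetical
}

low_total = sum(low_cost_route.values())           # 16.0
integrated_total = sum(integrated_route.values())  # 11.0
```

Even with a generous guess at the integrated SoC's price, the "cheap" SoC plus converter ICs comes out several dollars more expensive, which is why it gets eliminated from consideration so readily.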
They will work with only a very few ODMs, who will usually have signed GPL-violating NDAs, illegally restricting access to the GPL Linux kernel source code, claiming that it is entirely "their copyright", and so on. So, to answer the question: without the QiMod approach, the uses to which SoCs can be put are very limited. Specific markets, specific approaches, total control, and a high risk of having product impounded at Customs when it's imported, due to Primary as well as Secondary Copyright violation, and so on.

However, with the QiMod approach the markets open up due to the standardisation and flexibility - but even here, things are dependent on the capabilities of the SoC itself. Due to power constraints, SoCs typically do not have the kind of Graphics capabilities that are expected even of the average high-end Desktop PC, although, with the continued advances in shrinking geometries, it's fairly safe to say that within the next 3 years the 3D Graphics capabilities of SoCs will exceed the "Good Enough Computing" threshold for most 3D purposes. Also, again due to power and pin-count constraints, running the kinds of interfaces typically seen on servers, such as multiple gigabit ethernet lanes and multiple PCI Express lanes, simply isn't practical in a 1 to 1.5 watt chip. So for high-end uses such as low-latency server farms, real-time video processing, ultra-high-end computer games and so on, tiny computers simply aren't useful or even practical: the high-speed peripheral standards seen on x86 hardware were simply never designed with low-power SoCs in mind. But it's worth pointing out that for cluster farms and low-power cloud computing, tiny computers are perfectly suited to being press-ganged into the server market, thanks to the space, power and cost savings, and the fact that many of them have SATA-II or SATA-III and Gigabit Ethernet.

What technological developments (for example, smaller transistors) have enabled the development of tiny computers?
-----------------------------------------

It's all down to geometries. 65nm SoCs of a few years ago were typically around 700MHz, were single-core variants and typically consumed about 1 watt. Some specialist variants could go as high as 1GHz, but they were quite rare. Now, we are seeing 40nm SoCs which are 1GHz, single- or dual-core, and they're still only using about 1 watt. Once you get to 28nm, it's possible to make a quad-core 1GHz SoC, and again it's still only using about 1 watt or thereabouts. By 22nm we will be seeing even octal-core 1GHz to 1.5GHz SoCs, again around 1 or 2 watts, and quad-cores of the same speed using around 0.5 to 1 watts.

The trend in the SoC industry is therefore to use the geometries to increase the level of integration and the number of cores, rather than to increase the power consumption. This is because power consumption follows a square law on geometries as well as on clock speed. So, if you keep the clock rate the same and the geometry shrinks by a factor of 2, it's possible to increase the overall computing performance of a SoC by a factor of 4 at the same power levels. With the QiMod modular approach, therefore, we can set a clear roadmap, confident that the performance and power efficiency will increase dramatically. However, as shown above, this approach only really works if standards are set. Without standards such as EOMA, the advances in technology will continue to increase the burden on the software development ecosystem.

Why were tiny computers impractical in the past? Why are they practical now?
--------------------------

The question isn't so much whether they were practical, it's whether they were even desirable. Twelve years ago, SoCs ran at a maximum speed of around 100MHz and were typically paired with about 32MB of RAM - 64MB if you were lucky. Everyone else in the world was running the Windows OS with at least double the CPU speed and double the RAM.
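As an aside, the square-law rule of thumb from the geometry discussion above can be turned into a back-of-the-envelope calculation. This is a sketch of the article's own simplified scaling argument, not a process-accurate power model:

```python
def relative_power(feature_scale, clock_scale):
    """The article's rule of thumb: power follows a square law on
    feature size and on clock speed. A deliberate simplification,
    used here only to reproduce the text's arithmetic."""
    return (feature_scale ** 2) * (clock_scale ** 2)

# Shrink the geometry by a factor of 2, keep the clock rate the same:
per_core = relative_power(0.5, 1.0)   # each core now uses 1/4 the power
cores_in_same_budget = 1 / per_core   # so 4 cores fit in the old budget
```

This is exactly the 4x performance at the same power level described above, and it is why vendors spend the shrink on extra cores rather than on higher clock rates.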
Then, around 2008, the Android OS and the introduction of 700MHz+ SoCs matched with 128MB or 256MB of RAM blew everything away and completely changed the game. However, even when Android made the Linux kernel and OSes other than Windows more legitimate, and even when the Apple tablets poured fuel onto the fire and encouraged SoC vendors to create better SoCs, things still really didn't open up - and in many ways they still haven't.

There's a concept called "Good Enough Computing", coined around 2009. It's the critical threshold point beyond which computing devices really do not need to follow Moore's Law to create faster and faster devices, because people simply don't need them. Instead, following Koomey's Law, we can keep the speed roughly the same and decrease power consumption instead, to the point where devices become smaller, consume less power, require smaller batteries and generally cost less.

Again, however, it's worth emphasising that if matters are left solely in the hands of the SoC vendors to deploy their "vertical market" specialist strategies through their existing supply chains, end-users aren't really going to see the benefits of this revolution, not in the near future at least. Without the EOMA standard, vendors will continue to create specialist appliances that are purpose-restricted; or they will create engineering boards that are low-volume and therefore costly; or they will create SO-DIMM or MXM "modules" which can only safely be installed by a factory, not an end-user; and they will almost certainly continue to create mass-volume products that are GPL-violating, or DRM-locked (non-rootable), or both - all of which is to the detriment of the end-users. By setting and adopting standards such as EOMA, however, the limitations of these approaches go away. The software is standardised; the modules are standardised; the devices which use those modules are standardised.

Why are so many tiny computers being developed now?
--------------------------------------------------

The number of licensees of ARM and MIPS cores is huge - hundreds - and that's only going to increase. There have always been tiny computers around, in the form of Engineering Boards and Modules: most of these re-use MXM, DIMM or SO-DIMM form-factors because the sockets are easily available, thanks to their mass-volume original uses. It's just that the cost of such Engineering Boards, because of the lower volumes, has typically been $250 to $1000 for a device which, at least until recently, could not even run a standard useable general-purpose OS, and which required a motherboard to power it up. Features, speed and interfaces are all now increasing, and prices dropping, but they're still Engineering Boards, and all still need special handling by technically-competent individuals.

Now what has happened, as mentioned above, is that thanks to Android legitimising Linux, and thanks to many of these devices being over 1GHz and having 512MB to 1GB of RAM as well as having standard interfaces such as USB, Ethernet, HDMI and SATA, they're opening up the realm of general-purpose computers, with the additional exciting benefit that they're perfect for tinkerers as well. So there is now quite a large market, where pricing for such "Engineering Boards" used to be $150 two years ago but has more recently dropped to as little as $70, and these boards are fully-functioning computers with built-in features that would easily outperform a Desktop PC of 6, maybe 8, years ago.

So, coming from the original specialist and highly locked-down markets of Mobile phones, IPTVs and hand-held products such as tablets, it's simply that the increasingly demanding requirements of these markets - running games, playing videos, browsing the web and so on - have pushed up the processing capability to the point where these SoCs happen to be useful for general-purpose computing as well.
At that point, they have the potential to become open for use by anybody, which is just incredible.

What are developers trying to accomplish?
----------------------------------------

SoC teams are typically following the market trends. By contrast, we're taking a holistic view of the entire industry and coming up with a solution that would improve the situation for everybody: SoC teams, OEMs, ODMs, retail stores, the shipping industry and of course the end-users themselves. So, whilst SoC teams are working on vertical markets, looking to improve the ways in which they can make life easier for their present clients by adding new embedded features into their SoCs for future markets, we're creating a paradigm shift that takes them and their future plans into account but opens up their markets even further, by treating their SoCs as one of the critical components of a general-purpose, standardised, mass-volume computing platform.

What important things does this mean for the future of computing or technology in general?
---------------------

The most important thing is that the QiMod approach is a step back into tiny, low-power, affordable, stand-alone and mass-volume computing. It's a paradigm shift that has only become possible around now. Systems that are EOMA-compliant do not have to be complex. The devices that the CPU Cards plug into can be 2- to 4-layer boards, having nothing more than a few peripheral and power ICs and some connectors.

Look at the Motorola Atrix Lapdock product, and combine it with one of the USB memory-stick computers, for example. It's possible to buy the Atrix Lapdock for about $65 to $70, and it's possible to buy little USB-OTG computers with an HDMI output for about $50. The combination of the two products means that you can get a fully-functioning 12in laptop with an 8-hour battery life for about $120 retail, and the end-user can install literally any software that they can find or create, anywhere in the world.
Now, that combination - Lapdocks and USB-OTG computers - isn't standardised. It's a complete fluke, and it's one that Motorola is actually objecting to and has locked down in future revisions of their Lapdocks, failing to recognise the potential here and frustrating creative individuals everywhere. Imagine, however, if there existed an Industry Standard that made it possible to continually upgrade either the processor card or the actual device, instead of forcing end-users to buy an entirely new device just because the integrated battery failed after 3 years, or the screen was damaged out of warranty, or because the CPU wasn't powerful enough to run the latest apps. This is what the QiMod approach is all about: creating a stable, simple standard for mass-volume computing appliances that everyone right across the Industry can count on for at least a decade. We're at a really exciting and empowering time in the history of computing. 2012 will be seen as the time when the face of computing really started to transform.

What new types of tiny computers will be developed in the next 5-10 years? What will they be used for? What benefits will they offer?
--------------------------

It's hard to say, but there are some exciting possibilities, and some limitations. It's quite unlikely, for example, that large DDR RAM or large NAND Flash will be integrated into the same silicon as the SoC in the near future, because the processes are completely different for each. DDR RAM ICs are very specialist and regular, so the geometries are altered arbitrarily to get the absolute maximum memory density. SoCs, on the other hand, have to have much more stability and a roadmap for development, so fixed geometries are picked and published so that design houses such as Mentor Graphics etc. can provide stable ASIC development and validation software. 3D transistor development techniques may however alter this in the future, making it hard to make accurate predictions.
Following the existing trends, however, we can see that the future of the core silicon is driven by the process feature-size progression: typically the speed remains the same (1GHz or just above) while the number of cores increases instead, with the power consumption remaining about the same. This means that OSes will need to adapt to a multi-core approach. Graphics performance will also improve as fast as the general processing power, meaning that markets which were previously closed, such as high-end gaming, will open up as well.

One of the key benefits may turn out to be the adoption of these SoCs in data centres. Power consumption by the top "cloud" computing companies such as Google is vast, yet much of that power is, according to a recent report, being consumed by data centres in case we MIGHT need it. In other words, there is a vast amount of energy wasted by x86 servers just running at idle! The SoCs in these tiny computers are much more power-efficient at idle: they were initially designed for optimum battery life. It just so happens that they're becoming fast enough, and have interfaces such as SATA and Ethernet, making them suitable for use in data centres - a highly unexpected and unanticipated outcome.

What new technical advances will enable the development of the new tiny computers?
--------------

64-bit, virtualisation, increases in clock speed, multi-core parallelism: all these things will help to increase the "respectability" of tiny computers. They're ready now, but the legacy of x86 and Windows, as well as the prior exclusive positioning of these SoCs in vertical specialist markets, is holding back their adoption more than anything else. So it is not so much an advance in technology that will help with the development and adoption of tiny computers, as it is marketing and an understanding of the potential, here.

What technical and marketplace challenges will future tiny computers face?
-------------------------------------------------------------------------

Right now, the key limiting factor is the SoC vendors themselves. Due to the high level of integration and the overall complexity of their products, they're frightened to death of being overwhelmed with support calls. MStar Semi, for example, go to the extreme of prohibiting you from knowing *anything* about their SoCs. They ask you for the design criteria and they do absolutely everything for you - product design, software, tooling - everything.

A few companies, such as Texas Instruments and Freescale, make use of Linaro to go to the other end of the spectrum, which is to provide a fully GPL-compliant Board Support Package, and even typically provide full schematics (ORCAD, Allegro and PADS) and Gerber files under Open Source Licenses. These companies recognise that, rather than acting out of total fear and paranoia that some competitor MIGHT copy their design, the easier it is for ODMs to design their SoCs into final products, the more SoCs they will sell. These companies are confident because, in a market where increased competition means the lifetime of a SoC can be as little as 12 months, it can take years to reverse-engineer a hardware design; and even with full technical details of the APIs, it's often easier to start from scratch than it is to make a clone of some other company's video hardware, for example. And why would you want to help promote another company's hardware engine, where all the software is going to have your competitor's Copyright notices all over it?

But Texas Instruments and the other companies which are working successfully with Linaro, and who are honouring the GPL Software License and providing significant technical documentation, are then burdened with the cost of that integrity: the cost of their SoCs goes up as a result.
TI in particular was, very unfortunately, forced to abandon the mobile marketplace - yet QiMod would love to put those very same SoCs into mass-volume EOMA-68 modules! By contrast, the GPL-violating SoC vendors, who typically do not provide any significant or accurate documentation, end up making cost savings that undercut those SoC vendors who act with integrity, by quite a margin. We at QiMod believe that integrity and openness should be rewarded and encouraged: we will not work with Copyright-violating companies.

So again, the question can be answered from two perspectives. Without the QiMod approach, there are significant challenges to overcome, even for tiny computers, with all the potential that people can see might be there. So you created another tiny computer which was targeted at a specific market, hoping to lock the ODM into adoption? So what: everyone else in the industry is hoping for exactly the same thing, and it's not working, because the market is moving too fast. Evidence of this can be seen in an article in EE Times by Rock-Chip's VP, Feng Chen.

In the Industrial and embedded markets where many of these SoCs came from, that strategy worked: many clients expect a 10-year support lifecycle. In the consumer world, end-users buy products based on features, and the sheer overwhelming number of SoCs coming onto the market means that one product or one SoC gets about 6-12 months of glory and then it's all over. In 2007 it was the Telechips ARM11 SoC that disrupted the markets. In 2009 it was the GPL-violating AMLogic ARM Cortex A9. In 2011 it was the Allwinner A10 (an ARM Cortex A8), which is still in its prime for the time being. Each time a new SoC comes along, it MASSIVELY disrupts the entire market. The Allwinner SoC actually caused a major recession in Guangdong due to the collapse of ODMs and factories who *didn't* adopt it. Suppliers who were holding stock of other components went bust as their guaranteed cash orders and contracts were reneged on.
The QiMod approach therefore helps to stabilise this situation, because over half the product - the main chassis, whether it be a laptop chassis, tablet chassis, IPTV chassis and so on - can be sold as a separate, stable and predictable item, just like the Motorola Atrix Lapdock. The other half of the product - the CPU Card - can be developed and upgraded on a separate timeline, with SoC vendors competing on a level playing field that is, ultimately, to their benefit. It's very hard for everyone to plan future development based on unstable "glut and famine" cycles. So the existing legacy challenges brought about by the SoC industry's history are what is really holding things back. The QiMod modular approach, based around the patented EOMA-68 approach, helps stabilise the industry, allowing everyone to plan ahead, and to schedule and cost out future product designs with confidence.

------

Why is 3 watts the magic number?
-------------------------------

3 watts is the threshold above which it begins to become problematic to get the heat out of a device without a fan or some other thermal-dissipation measure, such as beryllium-copper springs on the motherboard and a copper contact point on the module, or special casework made of copolymers with high thermal conductivity, etc. Whilst such solutions exist, we did not wish to impose them onto the manufacturers, so we simply went, "right - 3.5 watts is your lot. You can go up to 5 watts for a short period of time, but don't push your luck". That way, the chassis designers can just make sure that the heat goes out through grille-shaped holes in the plastic, and not really worry about it beyond that.

We do have a 10 watt 8mm x 54mm x 86mm CPU Card option for EOMA-68, best suited to x86 processors - but we're still waiting for AMD to respond to our emails, and we have absolutely no idea how to even go about contacting Intel: there's not even an email address on the web site, next to their very sparse datasheets.
We'd love to hear from both companies, obviously.

Support for power over ethernet?
-------------------------------

That's up to the chassis designers. We've deliberately not made POE part of the EOMA-68 specification. If, however, someone wants to make an engineering board or a mass-volume product that takes the 48v POE out before passing on the ethernet signals to the EOMA-68 CPU Card, that's entirely possible. We did not, however, want to force the cost of 48v POE circuitry onto CPU Cards where that feature would simply never be used. EOMA-68 is a mass-volume end-user standard (like PCMCIA was): it's not an "industrial" standard. It just so happens that the cost of EOMA-68 CPU Cards would, because of the mass-volume pricing and also the openness, be highly compelling for use in engineering products.

What is your vision of the benefits of this revolution that consumers might see?
-------------------

There are several. They take quite a bit of explaining, though, so let me run through an example scenario. Contrast this with the scenario which actually happened, where a friend of mine on a low income bought two little Skytone Alpha 400s (look them up on Google) as a Christmas present for her two young daughters. She couldn't really afford them... and they were so "locked down" and also underpowered that they were completely useless. I'll use them as an example, showing the difference.

* A single mum goes into a retail hypermarket. She wants to buy a computer for her daughter, but her budget is limited. She sees, on the shelf, three rows.

  - The first row is CPU Cards, all conformant to the EOMA-68 specification, ranging from £25 to £80. There is even a TV option (£35) which instantly turns any device it's plugged into, into an Internet Android TV and Media Centre.

  - The second row is chassis, all the way from 5in tablets (£20) to 7in "mini laptops" (£25) to 10in tablets and laptops (£50), desktop PCs (£15), full 18in HD laptops (£100) and HD LCD monitors (£60).
  - The third row is battery packs, including a "blank" pack (£5) that takes AA rechargeable batteries, and 3-cell, 6-cell and 9-cell packs with increasing cost and increasing battery life. There is even a "mains" option (£5) which can either be used as a charger or be plugged into the device permanently, for example to turn the tablet into a wall-mounted TV or a picture frame.

* She only has a budget of £60, so she buys the lowest-spec'd CPU Card, the 7in mini laptop chassis and an AA battery module.

* The kids are delighted with their new toy. They can play games, play music, twitter their friends, and so on. The cost of the AA batteries gets expensive, so they soon buy rechargeable ones.

* Grandma learns of this and, as a birthday present in consultation with mum, buys an upgraded CPU Card for £30. They take out the SD/MMC card containing all the games, apps, documents and settings used on the older CPU Card, put it into the new CPU Card and wrap it up as a present. The older CPU Card gets sold on ebay and fetches £15.

  (Note: instead of having to spend £70 on a new laptop, the actual amount of money spent was about £15, taking the sale of the older CPU Card into consideration. The performance of the newer £30 card, thanks to the introduction of newer geometries, gives a whopping two-fold increase in performance to this tiny laptop.)

* After a couple of years, the 7in laptop is out of warranty and it gets dropped, and broken. Mum uses this as an opportunity to buy a slightly bigger laptop - a 10in one. As their daughter is getting a bit older, it's a good time anyway: she wants to watch films on a bigger screen, and she wants to type email to her friends on a bigger keyboard.

* Very soon it becomes clear that AA batteries don't hold enough capacity, so the daughter goes out with her pocket money and buys a bigger battery pack from the same store. She sells the older battery pack on ebay for £3.
  (Note: instead of having to spend £200 on a brand-new 10in laptop, the actual total amount spent is only about £70, thanks to re-use of the CPU Card. The e-waste savings are also considerable: only the 7in laptop chassis, with its broken case and screen, needs to be disposed of - not the entire machine.)

* After a couple more years she's going to school, so mum goes back to the store and buys her an upgraded CPU Card with more memory, so that it can better handle web browsing and can run GNU/Linux OSes and larger applications such as LibreOffice without choking. She keeps the older CPU Card as an emergency backup.

  (Note: again, the e-waste and cost savings are considerable: instead of a £300 brand-new laptop, only a £50 CPU Card is needed. And again, thanks to shrinking geometries, the power savings and performance increases are enormous.)

* One year into school, she decides to take a course in photography. She goes back to the store and buys an SLR Digital Camera chassis (£50) with a standard Olympus lens fitting, and gets second-hand Olympus lenses off eBay. It is an EOMA-68-compliant camera that even has a 3in LCD screen on the back. She slots in the older CPU Card and is able to store pictures on it. When she wants to review the pictures or watch the low-quality home videos she has made, rather than take the SD/MMC card out she simply transfers the ENTIRE CPU Card into her 12in laptop chassis. It has a keyboard, after all; it's easier to do that and use it to send the pictures to her friends.

* Later on, she takes a degree in photography and gets a job. Her smartphone is EOMA-68-compliant and takes a 3G-enabled EOMA-68 CPU Card; so does her laptop. She has an EOMA-68 SatNav chassis in the car. When driving, rather than plug the smartphone into a carphone docking station, she takes the 3G CPU Card out and plugs it into the car's SatNav, turning the SatNav into a 3G phone.
Now she can drive and take hands-free phone calls at the same time.

* She gets to work, takes the 3G CPU Card out of the SatNav, and plugs it into one of the many EOMA-68-compliant cameras that she uses. She has several, for backup purposes: a photographer's nightmare is equipment failure, and EOMA-68 allows her to keep several spare CPU Cards - they don't take up much space.

* The combination of the 3G CPU Card and her Digital SLR Camera chassis allows her to upload previews of her photos directly to the newspaper she works for, in real time. She wins a journalism photography award for capturing footage of police brutality: the police do not realise that her camera, with its 3G EOMA-68 CPU Card, can operate independently when an "emergency" button is pressed, uploading audio or low-quality video over the 3G network to a secure off-site location in real time.

* At work, she is able to take the CPU Card out, with its pictures, and plug it into her workstation, which is a 30in HD-quality LCD screen. There is no desktop computer box cluttering up her desk: the CPU Card goes directly into a slot on the side of the LCD. She can work directly on the pictures, because the imaging software is pre-installed on the exact same 3G-capable CPU Card that she uses every day in the camera itself.

* Her boss tells her that one of her colleagues has a problem - a broken screen - and asks her to use her laptop instead, temporarily, because the 30in HD screen is urgently needed and there is no time to get a new one before the deadline. She grumbles but, rather than save her work, she suspends the CPU Card, pops it out, puts it in the laptop, and within 10 seconds she is carrying on with her work, unhappy only that she doesn't have the same viewing area. The next day the replacement 40in LCD screen arrives and she is happier than ever. Unsurprisingly, it is an EOMA-68-compliant LCD screen.
I could go on, but I believe the point is made: the options and possibilities are literally endless. Games consoles. Network Attached Storage (NAS) boxes. Ethernet routers into which you can plug a 3G card, making an emergency backup internet access point. Anything that takes a CPU, doesn't need vast amounts of power, and can make do with one Ethernet connection plus USB and SATA for connectivity can be turned into an EOMA-68-compliant system, as long as its size is under about 3.5in by 3in by 0.5in.

The savings are in landfill, energy consumption, design costs, delivery costs - everywhere you look there is a cost, complexity and time saving. It's quite hard to comprehend why this *hasn't* been done before, until you realise that it simply wasn't possible until two things happened: a) the x86-Windows stranglehold was broken, and b) SoCs grew up and became powerful enough, integrated enough, small enough and low-power enough, all at the same time.

How do you see these SoCs escaping the vertical-market niche?
-------------------------------------------------------------

Really? Without using something similar to EOMA-68, I really don't see them escaping - not even via the SO-DIMM form-factor or the MXM form-factor (used by the Q-Seven Standard), both of which are too complex (typically 200 pins) and cannot be user-upgraded. That's not being funny: that's being realistic.

SoC vendors are too caught up in the "control" mindset, too afraid of being overwhelmed by the software complexity of their own creations, swamped by support calls: "How do I connect GPIO 7 to this USB device? And, err, I know you told me twice already, but how do I program that GPIO pin to do the right job, again?" You only have to look at Texas Instruments' or Freescale's online support forums to realise how much of a drain on resources these kinds of questions can be.
Not every SoC vendor wants to be in that market: they simply haven't the resources. So rather than even get into that game, they entrench their mindset further, shut out everything and everybody except their top ten most trusted clients, and get on with the job. The problem is that if these SoC vendors also try to be software development companies on behalf of their clients, they not only shut themselves off from opportunities but also expose themselves to the fickleness of market forces. ARM now has well over 600 licensees, and new SoCs come out of China, Taiwan and Korea every few months. Products based around these SoCs cannot be upgraded. If a better product makes it to market first, it takes over the market... but only for a few months, because you can guarantee that a better, faster or cheaper but otherwise identical product will come out very soon afterwards.

How might they provide the most value in a data centre in terms of reducing power usage?
----------------------------------------------------------------------------------------

In situations where fast response times, heavy number-crunching or large databases are not required, tiny modular computers are ideal. Where they excel in particular is where there is a large number of transactions to be processed, each transaction independent of the others. Cloud computing, web serving and e-commerce are perfect examples. You'd have a separate big-iron database server running on x86 CPUs, a large number of low-power, low-cost web-server front-ends handling the transactions, and a round-robin DNS load balancer distributing the incoming requests across all of the tiny computers. Quite straightforward, and nothing that hasn't already been done in some large data centres: you would simply have the load balancer deal with rather more back-ends than if you were using x86 hardware as the web servers.
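The round-robin distribution described above can be sketched in a few lines of Python. The back-end hostnames are invented purely for illustration; a real deployment would of course use DNS records or a front-end proxy rather than application code:

```python
from itertools import cycle

# Hypothetical pool of tiny-computer web front-ends (the names are
# invented for illustration - a real site would list its own nodes).
BACKENDS = ["web-node-01", "web-node-02", "web-node-03", "web-node-04"]

def make_round_robin(backends):
    """Return a function handing out back-ends in strict rotation,
    mimicking what a round-robin DNS load balancer does per lookup."""
    pool = cycle(backends)
    return lambda: next(pool)

next_backend = make_round_robin(BACKENDS)

# Six incoming requests are spread across the four nodes, wrapping around.
assignments = [next_backend() for _ in range(6)]
print(assignments)
```

The point of the sketch is that every back-end is interchangeable: because each transaction is independent, adding more tiny computers to the pool scales capacity without any coordination between the nodes themselves.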
Look at online hosting as another example. A company that offers virtual hosting of a 1GHz CPU with 1GB of RAM and 20GB of disk space might charge about £20 a month. Given the electricity bill, even when idle, the physical machine behind that virtual hosting only pays for itself after about 18 months. If, on the other hand, the company had bought a "tiny computer" costing only £50 retail, the amount of electricity it uses is so small by comparison that it would pay for itself in about four to six months. And, as it would be running a GNU/Linux OS, the clients really wouldn't know or care that it was an ARM or MIPS processor rather than x86.

Scale that example up to large cluster-computing farms and you start to get a pretty good idea of how important this is to data centres and research facilities. Even where individual computing modules cannot handle large database queries on their own, there is no reason why the data centre should not run those queries on suitable big-iron hardware, shared across several tiny computing modules and operating at full capacity, where the power requirements are then justifiable.
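The payback arithmetic in the hosting example can be made concrete with a small sketch. Every figure below is an assumption chosen only to land near the article's rough estimates - a £300 x86 box netting about £17 a month after electricity, versus a £50 tiny module assumed, for illustration, to sell a cheaper £10-a-month plan matched to its capacity - not measured data:

```python
def payback_months(hardware_cost, monthly_revenue, monthly_electricity):
    """Months until cumulative profit covers the hardware purchase price."""
    monthly_profit = monthly_revenue - monthly_electricity
    if monthly_profit <= 0:
        raise ValueError("the machine never pays for itself")
    return hardware_cost / monthly_profit

# Assumed figures (illustrative only): a conventional x86 hosting box
# at £300 earning £20/month with a £3.30/month electricity share,
# versus a £50 tiny module earning £10/month with £0.50/month electricity.
x86_months = payback_months(300, 20.00, 3.30)   # roughly 18 months
tiny_months = payback_months(50, 10.00, 0.50)   # roughly 5 months
print(f"x86: {x86_months:.1f} months, tiny module: {tiny_months:.1f} months")
```

Whatever the exact tariffs, the shape of the result is the same: the hardware cost and idle power draw of the tiny module are so much lower that its payback period is a small fraction of the big box's.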