A panorama of the global industrial organization of the IT (Information Technology) and HPC (High Performance Computing) sector.

Provisional warning: this is a «post in progress» regarding both content and embellishment (links, syntax). For some paragraphs the information will be completed later, and for others it may change over time, until this provisional warning disappears.

Index.

I. INTRODUCTION.

1.1. Content description summary.

1.2 Sources of information.

II. THE IT SECTOR. DEFINITION AND OVERVIEW:

III. THE HPC SECTOR: OVERVIEW.

3.1 Market-oriented classification of HPC systems, market size and market dynamics.

3.2 Processors.

3.3. Storage.

3.4. Hardware interconnect.

3.5. System Software (OS) / middleware.

3.6. Programming Languages, software tools and Application software for parallel computing.

3.7. HPC as a service (GRID, SOAS, SaaS, CLOUD).

IV. HPC Sector vendors.

V. HPC Sector Applications. 

VI. Distribution Channels. 

VII. Trends and Challenges.

———————————————————————

I. INTRODUCTION:

1.1. Content description summary.

As promised in the last post, in this one I present a very brief description of the global industrial organization of the High Performance Computing / supercomputing field. The aim is to provide a superficial, bird's-eye view of the sector, not a mole's-eye deep one.

I start by describing the position of the supercomputing segment within the computing sector; then I describe the elements of a supercomputer (HPC system) and explain who supplies these elements.

In the next sections I provide some data about supercomputer vendors and buyers, about the distribution channels, and about the corporate / marketing strategies of the agents acting in this field.

We omit regulation and policy issues but include some information about IP strategies: the proprietary strategy, the semi-open («walled garden») software strategy, the open-source software movement, the free software movement, and the most permissive kinds of licenses coming from the academic environment, very close to the Public Domain. Some interesting IP links:

Free software license

Proprietary software licence

Software licence (in Spanish)

1.2 Sources of information.

Below is a list of the sources I am using for this post, classified as:

a) Institutional & Regulation and Policy & Agencies' Programs and Projects

b) Academic/Theoretical/Technical oriented sources

c) Technical to market oriented sources

d) Market oriented

e) Trends and Future oriented.

Of course, this list is by no means comprehensive. I have made wide use of Wikipedia, selecting those articles I found good enough to be quoted.

a) Institutional, Agencies programs and projects, Regulation and Policy:

–EU-DEISA: http://www.deisa.eu/ and http://www.deisa.eu/about/partners/principal.

–EU-PRACE. We talked about PRACE in previous post. http://www.prace-project.eu/

–EU-EESI (European Exascale Software Initiative): this project provides an interesting report about the situation of supercomputing at the institutional level in the EU, the USA and Asia. EESI web page: http://www.eesi-project.eu/pages/menu/homepage.php and the EESI report (a must!): http://www.eesi-project.eu/media/download_gallery/EESI_Investigation_on_Existing_HPC__Initiatives_EPSRC_D2%201_FF.pdf.

–EU-FP7-Planet HPC: http://www.planethpc.eu/.

–EU-ICT sector: http://ec.europa.eu/information_society/eeurope/i2010/docs/info_sheets/7-2a-i2010-innovation-en.pdf

–EU-EITO (here the prefix EU is only geographic, not institutional):http://www.eito.com/abouteito/editorial.htm

–EU-NTA (here the prefix EU is only geographic, not institutional; National Trade Associations): http://www.aetic.es/CLI_AETIC/ftpportalweb/documentos/POSICIONAMIENTO%20EUROPEO%20SOFTWARE(nov2008)_2.pdf

–USA-DARPA, USA-PITAC, USA-DoE, USA-DoD, USA-NSF,

–USA-PCAST: http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf.

–Japan:

–China:

–India:

–LatinAmerica:

–Russia: Skolkovo (the Russian Silicon Valley).

b) Academic / theoretical / technical oriented sources:

–The journal of Supercomputing (http://www.springer.com/computer/swe/journal/11227).

–IEEE Communication Society: http://dl.comsoc.org/comsocdl/

–A brand new book about HPC  (2010): http://www.lulu.com/items/volume_69/9237000/9237313/7/print/scicompbook.pdf

–A book about interconnection networks (2003): Interconnection Networks, an Engineering Approach (some parts can be accessed through Google Books).

–The CPU Shack Museum: http://www.cpushack.com/.

–microprocessors timeline: http://processortimeline.info/

–Technical Resources on microprocessors: http://bwrc.eecs.berkeley.edu/CIC/

–IESP (International Exascale Software Project): http://www.exascale.org/iesp/Main_Page

c) Technical-to-market oriented sources.

— TOP500. The best source for checking what is going on in the HPC market is the TOP500 project web site: http://www.top500.org/project. Although this site is more technology-oriented than market-oriented, it has interesting statistics, which refer to the 500 HPC systems in their ranking and not to the general HPC market. How representative of the overall market are the systems included in their ranking? I do not know. The statistics available on the site cover system architecture (cluster, MPP, constellation), processor architecture (vector or scalar), processor family (IBM Power, NEC, SPARC, Intel, AMD), interconnect family (InfiniBand, Myrinet, Gigabit Ethernet and many others), interconnect product (several available in the market), OS family (Linux, Unix, Windows, BSD-based and mixed) and concrete OS product (many available in the market). As far as I know, this list only includes general-purpose systems, but we must not forget that there are special-purpose supercomputers.

–HPC challenge: http://icl.cs.utk.edu/hpcc/index.html and http://icl.cs.utk.edu/projectsfiles/hpcc/pubs/hpcc-challenge-benchmark05.pdf. Technical benchmarking.

–Graph500: yet another benchmark-based ranking of HPC systems, TOP500-like but testing a different kind of HPC application: data-intensive ones. http://www.graph500.org/ and http://www.oracle.com/technology/events/hpc_consortium2010/graph500oraclehpcconsortium052910.pdf and http://www.genomeweb.com/blog/move-over-linpack-here-comes-graph500 or http://www.technologyreview.com/blog/mimssbits/26128/?p1=A2.

–Green500:

–MIT Technology Review. Online media from an academic source: http://www.technologyreview.com/blog/mimssbits/25913/.

–InsideHPC. Online blog / media for HPC: http://insidehpc.com/.

–HPCwire. Online publishing media for HPC: http://www.hpcwire.com/blogs/

–HPC in the Cloud. Online media for cloud computing: http://www.hpcinthecloud.com/home/.

–Computerworld. Online media for the IT sector: http://www.computerworld.com/s/article/9199758/Japanese_supercomputer_gets_faster_but_draws_no_more_power

–Internet.com. Online media for the ICT sector: http://www.internet.com/.

–The Register. Online media for techies: http://www.theregister.co.uk/2009/05/28/sicortex_assets/

–NetworkWorld. Online media for networking: http://www.networkworld.com/.

–Enterprise Storage Forum. Online media for servers, storage, networking and IT management: http://www.enterprisestorageforum.com/; «The internet’s top destination for news and information on enterprise storage and data management trends, including product and industry news and features on cutting-edge storage technologies». It seems to be related to the site above.

–Intersect360.com. Research and consulting in market intelligence for the HPC sector: http://www.intersect360.com/ and some of its services: http://www.intersect360.com/documents/InterSect360_Market_Advisory_Service.pdf. Their series is very interesting: HPC User Site Census: Systems; HPC User Site Census: Processors; HPC User Site Census: Interconnects/Networks; HPC User Site Census: Storage; and HPC User Site Census: Middleware. See: http://www.intersect360.com/industry/research.php?id=34. Intersect360 is a new company founded by former employees of the Tabor group, publishers of HPCwire.

–Instrumental Inc.: consulting company in the HPC field. http://www.instrumental.com/news_pr_121610.php.

–IDC, a consulting company specialized in ICT: http://www.idc.com/. This is the consulting company that produced the EU's HPC strategic report. Its research page: http://www.idc.com/research/reshome.jsp. The list of HPC research reports: http://www.idc.com/research/searchresults.jsp?sid=0, and the first report in the list: http://www.idc.com/research/viewdocsynopsis.jsp?containerId=225691&sectionId=null&elementId=null&pageType=SYNOPSIS. At first I thought its price was 2 USD; it is not. This one, from 2009 and 12 pages long, is not cheap either: http://www.idc.com/research/viewtoc.jsp?containerId=222718. Fortunately, IDC provides some information gratis: http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22263210&sectionId=null&elementId=null&pageType=SYNOPSIS.

–iSuppli. Business intelligence for the electronics market (ranking of semiconductor manufacturers): http://www.isuppli.com/Pages/Home.aspx.

–IC Insights:

–In-Stat.com: http://www.instat.com/Abstract.asp?ID=318&SKU=IN0703487ASM.

–HPC User Forum (IDC site): http://www.hpcuserforum.com/EU/ and http://www.hpcuserforum.com/EU/index.html. Interesting downloads about the institutional view of HPC: EU, DARPA and NSF.

–IT-Tude: http://www.it-tude.com/about_it-tude.html. Interesting for grid computing and similar SaaS solutions. White papers and blog.

–Xbitlabs: http://www.xbitlabs.com/. For the PC market.

–Plunkett Research Ltd / market research company, all sectors including the ICT sector: http://www.plunkettresearch.com/Computers%20software%20technology%20market%20research/industry%20and%20business%20data and http://www.plunkettresearch.com/Telecommunications%20market%20research/industry%20and%20business%20data and http://www.plunkettresearch.com/Wireless%20cellphone%20RFID%20market%20research/industry%20overview.

–Global Industry Analysts (GIA) / Market Research Company, all sectors. http://www.strategyr.com/default.asp.

–Gartner Inc. Market intelligence for IT professionals: http://www.gartner.com/technology/home.jsp

–Netcraft. Self-description: «a respected authority on the market share of web servers, operating systems, hosting providers, ISPs, encrypted transactions, electronic commerce, scripting languages and content technologies on the internet». http://news.netcraft.com/about-netcraft/. Their web server surveys are very interesting: http://news.netcraft.com/archives/category/web-server-survey/.

–Computing.es: Spanish online media for the general ICT sector: http://www.computing.es/Noticias/201003180019/NEGOCIOS-Indra-lidera-el-ranking-del-sector-TIC-en-Espana.aspx. This links to a ranking, by yearly revenue, of Spanish ICT firms in 2009.

–TOP100 Research Foundation: Software TOP100: http://www.softwaretop100.org/global-software-top-100-edition-2010 and http://www.softwaretop100.org/global-software-top-100-edition-2010-the-highlights; Hardware TOP100: http://www.hardwaretop100.org/hardware-companies-top-100.php and http://www.hardwaretop100.org/hardware-top-100-the-world-s-largest-hardware-companies.php; IT Services TOP100: http://www.servicestop100.org/ and http://www.servicestop100.org/it-services-companies-top-100-of-2010.php and http://www.servicestop100.org/services-top-100-the-worlds-largest-it-services-companies-2010.php.

–Truffle100: http://www.truffle100.com/downloads/2010/Truffle100_2010.pdf and http://www.truffle100.com/ and http://www.truffle.com/ and http://www.truffle.com/images/cms/actu_617/01.20.2010_T100Clusters_Eng.pdf.

–EDN: http://www.edn.com/index.php. Self-defined: «EDN.com is a comprehensive information source for the EOEM (electronics original equipment manufacturer) segment, providing in-depth technical information for electronics design engineers and news and strategic business insight for executives».

d) Market Oriented

Vendor web sites and reports.

Intel report on the information explosion: http://download.intel.com/technology/computing/archinnov/platform2015/download/RMS.pdf.

Users web sites, expert blogs.

Daniel Nenni. Silicon Valley merchant foundry expert: http://danielnenni.com/.

The blog of Matt Reilly, one of the founders of SiCortex: http://www.bigncomputing.org/Big_N_Computing/Big_N_Computing/Big_N_Computing.html.

e) Trend- and future-oriented sources.

Nextbigfuture: a blog of the Lifeboat Foundation with news about disruptive science and technologies; the news items labeled «supercomputer»: http://nextbigfuture.com/search/label/supercomputer

———————————————————————

II. THE IT SECTOR. DEFINITION AND OVERVIEW:

Roughly speaking, the computing systems sector includes any device built from electronic hardware components (memories, processors, input/output devices and the connections through which all these elements communicate: the network or interconnect) combined with software components (system or application software), packed physically together on one or several boards and inside a box.

All these electronic and software components can be combined in multiple ways to become a marketable product, that is, a product that can satisfy the computing needs of some economic agent (individuals or corporations). At present several market segments have crystallized in this sector. I am aware that there are simpler classifications (see the Wikipedia article on Classes of computers) but I stick to a classification with a more market-oriented flavour.

Another definition of the ICT sector: http://www.eito.com/definitionsICT.htm and http://www.eito.com/definitionsCE.htm.

For simplicity, I exclude devices which combine electronic with mechanical or other physico-chemical or biological technologies. Accordingly, we will not include segments such as robotics or sensor network systems (see for instance the Wikipedia article on Wireless sensor networks). These fields are very interesting and subject nowadays to intensive research and development, but their inclusion would make this post too long. We may devote a special post to them in the future.

The production segments in this sector are as follows.

1.1. Hardware components (semiconductor industry):

This production segment includes processors, memories, interconnects and input/output devices.

The semiconductor industry might be included in this segment; it started in the sixties and is now a 250 billion USD industry. As in all high-tech sectors subject to accelerated technological innovation, the history of the industry is quite dynamic: good and bad P&SCM&BPO, R&D&IP, F&M&A&A and B&M&C decisions have affected the fate of the companies. Two articles from Wikipedia: http://en.wikipedia.org/wiki/Semiconductor_industry and http://en.wikipedia.org/wiki/Semiconductor_sales_leaders_by_year. A catalogue of companies at the CPU Shack Museum: http://www.cpushack.com/history.html and http://www.cpushack.com/mergers.html.

The top 20 players in 1987 (ordered by revenue, from first to last) were: NEC, Toshiba, Hitachi, Motorola, Texas Instruments, Fujitsu, Philips, National Semiconductor, Mitsubishi, Intel (10th), Matsushita, AMD, Sanyo, SGS-Thomson, AT&T, Siemens, OKI, Sharp, Sony and General Electric.

From 1988 up to 1991 NEC was the leader. Starting in 1992, Intel Corp. has been the leader through 2010 and at present enjoys very good health, with double the market share of the second company in the list, Samsung (which was not present in the 1987 list). So no changes are expected in the short term, despite the recent moves of competitors (the M&A of Renesas Technology and NEC Electronics, among others).

In 2009: Intel, Samsung, Toshiba, Texas Instruments, STMicroelectronics (former SGS-Thomson), Qualcomm, Hynix, AMD, Renesas, Sony, Infineon (former Siemens semiconductor division), NEC, Micron Technology (memory devices), Elpida Memory (NEC + Hitachi), MediaTek, Panasonic, Freescale (former Motorola semiconductor division), NXP (former Philips), Sharp, Nvidia, Rohm, Marvell and IBM Microelectronics (2.2 billion USD; it used to do better, but it is not IBM's core business).

Of course these lists include very different players, both in company profile (i.e. those working under the traditional IDM model and those working under the fabless / foundry model) and in the markets where they act (consumer electronics, wireless/mobile, telecom, IT, automotive/industry or military), but some trends are clear: some conglomerate-to-division fissions in the USA and EU, probably preludes of M&A, following the trend in Japanese companies. In this industry size matters: R&D investments are huge and economies of scale are advantageous. The Koreans and Taiwanese are moving up (the Taiwanese company in the list, MediaTek, is not a foundry but a fabless company specialised in wireless; its new MT6253, for mobile phones, integrates all essential electronic components, including DBB, ABB, power management unit and RF transceiver, onto a single chip. The company is an interesting case since it was founded in 1996). I am waiting to see the first P.R. China company in the list. Let us remind the reader that Tianhe-1A and its siblings used imported commodity processors. Some links about the Chinese semiconductor industry: an old report http://itri2.org/ttec/aemu/report/c2.pdf; a more recent general market overview and trends http://www.ventureoutsource.com/contract-manufacturing/trends-observations/2009/china-semiconductors-market-at-80-billion-in-2010; and here information about the first general-purpose Chinese chip, http://en.wikipedia.org/wiki/Loongson, developed by BLX IC Design Corp., a fabless company and spin-off of a Chinese university. It is a MIPS (RISC) processor. For processor architectures see http://en.wikipedia.org/wiki/Comparison_of_CPU_architectures and http://www.cpushack.com/ and http://www.cpushack.com/CPU/cpu.html#tableofcontents. There is a clear intention from the Chinese authorities to position the P.R. China in this economic sector.

Should we expect a wider adoption, if not generalization, of the fabless / foundry model (http://en.wikipedia.org/wiki/Foundry_model) in semiconductors, in parallel to what happened with ODM / EMS / OEM in electronics assembly and other contract markets (http://en.wikipedia.org/wiki/Electronics_manufacturing_services and http://www.instat.com/Abstract.asp?ID=318&SKU=IN0703487ASM)? Technological production changes (there used to be design/production entanglement in semiconductors, and some claim it is still the case: http://news.cnet.com/8301-13924_3-10059818-64.html), globalization (denationalization) / standardization trends and cost / risk management suggest that this might be the case even for big players, but the situation right now is one of coexistence of IDM players (the bigger ones) and fabless/foundry players (for the 13 global fabless leaders see http://www.edn.com/article/512023-13_fabless_IC_suppliers_expected_to_top_1B_in_2010_sales.php: 9 from the USA, 3 from Taiwan, 1 from the EU, ST-Ericsson). The leading semiconductor firm, Intel, works now according to the IDM model. It is known that the rise of this firm was linked to the rise of the PC and the extremely robust positioning of the Wintel standard in the PC/workstation segment, led by IBM at that time. But this situation will not last forever, and Intel is not yet well positioned (despite recent events such as the IP cross-licensing agreement with Nvidia) in several fast-growing new segments: wireless/mobile (that is, the low-power ARM «architecture») and games (that is, high-performance GPU «architectures») which, properly combined, might eat its territory from above and below (see http://insidehpc.com/2011/01/06/nvidia-to-forge-arm-chip-for-pcs-and-servers/ and http://insidehpc.com/2011/01/11/intel-shells-out-1-5bn-for-nvidia-tech-tag-team-against-amd/). So would it be a bad move for Intel to disinvest, selling its manufacturing facilities, and acquire or develop its own R&D departments for these ARM and GPU segments? Or does it have enough muscle to handle such R&D challenges without M&A and while keeping its manufacturing facilities? More about the advantages / drawbacks of vertical specialisation in the semiconductor industry in this interesting book: http://www.gcbpp.org/files/Academic_Papers/AP_Macher_InnovationGlobalIntro.pdf, and in chapter 3 of this book, devoted to semiconductors: http://faculty.msb.edu/jtm4/Papers/NAS.STEP.2008.pdf. Some fabless companies in the list are Qualcomm, MediaTek or Marvell. The leading merchant foundries are TSMC (http://en.wikipedia.org/wiki/TSMC), UMC (http://en.wikipedia.org/wiki/United_Microelectronics_Corporation), SMIC (http://en.wikipedia.org/wiki/Semiconductor_Manufacturing_International_Corporation) and GlobalFoundries (http://en.wikipedia.org/wiki/GlobalFoundries, founded as the divestiture of AMD's foundry operations). Others: http://en.wikipedia.org/wiki/Category:Foundry_semiconductor_companies. Another in mainland China: http://en.wikipedia.org/wiki/Hejian_Technology_Corporation; and though we usually associate India with software and outsourcing services, have a look at this: http://en.wikipedia.org/wiki/Hindustan_Semiconductor_Manufacturing_Company. About the manufacturing process: http://www.cpushack.com/MakingWafers.html.

We have seen that there are some big EU players in the TOP20 list, such as STMicroelectronics (former SGS-Thomson), Infineon (former Siemens) or NXP (former Philips). These three sum up to about 70% of the EU market (http://pamoga.blogspot.com/2007/12/la-industria-de-los-semiconductores.html). But how is the semiconductor industry doing in countries such as Spain, or other EU countries of similar size? Several good sources for the ICT industry in Spain are Asimelec + AETIC = AMETIC (since the end of 2010) and Red.es. Asimelec was a professional association of Spanish ICT (TIC) companies, in a very wide sense since it included content providers, and published reports such as the one for 2009: http://www.asimelec.es/media/Ou1/File/informe%20TIC%202010.pdf. This report is interesting for macro-data but uninformative for the particular micro-question we are interested in: are there at present Spanish companies whose activity is the design and/or manufacturing of chips of any type? AETIC was another professional association for ICT companies. Both have been merged into AMETIC. From AETIC we quote these two reports about the Spanish ICT sector: http://www.aetic.es/CLI_AETIC/ftpportalweb/documentos/Presentaci%C3%B3n%20Memoria-datos%202009_20abril2010.pdf and http://www.aetic.es/CLI_AETIC/ftpportalweb/documentos/64_revista.pdf. The second link includes the report «Spanish Digital Tech», where several success stories of Spanish ICT firms are described. Here we can see that there is some semiconductor industry in Spain in 2011. Two highlights: SIDSA and GIGLE, both related to digital TV. I might write something more detailed after I read it. A third report from AETIC is about information technologies alone, macro-data: http://www.aetic.es/CLI_AETIC/ftpportalweb/documentos/LasTecnologiasInformacionEspa%c3%b1a2009.pdf. Red.es is a state-owned company whose purpose is to promote the information society. They publish another survey-based annual report about the ICT sector, this time in English: http://www.red.es/media/registrados/2010-11/1289822032772.pdf?aceptacion=5f2b8f17695f72f577d2733014f2bdab. On page 45 of this report we can see statistics about the ICT manufacturing sector (recall that these are survey data): most ICT manufacturing in Spain consists of hardware assembly; the number of ICT manufacturing companies with more than 50 employees was 55, of which 14 had more than 250 employees; 13 companies had a turnover of more than 50 M euros; and less than 3% of the sales of these companies went outside the EU. The reports page of Red.es is http://www.ontsi.red.es/index.action. There is also another report, a good one, for the Madrid region, published by an agency of the Madrid regional government. Some other assorted links about this topic: http://mitsloanblog.typepad.com/inaki/2007/10/semiconductor-c.html, from the blog of a USA-based ICT entrepreneur interested in the physical layer (the post and its comments provide some links to Spanish companies); http://aui.es/IMG/pdf_oportunidades_semiconductores_enter_2009.pdf; http://www.mecalux.es/external/magazine/41531.pdf; http://pamoga.blogspot.com/2007/12/la-industria-de-los-semiconductores.html. According to the online media Computing.es, the total revenue of the Top 100 Spanish ICT companies in 2009 was close to 25 billion euros. The leading companies in this ranking are Indra, HP, IBM, Telefónica, IECISA and Accenture.
A link to the report: http://www.gmv.es/empresa_GMV/comunicacion/gmv_medios_2010/prensa/Computing-Lideres_2010_17-03-2010-ranking.pdf. I cannot guarantee that the ranking is comprehensive: I miss some companies that must be included in the ICT sector and by yearly revenue should have been included within the TOP50, such as Amper. A very good, I would say great, but old (2001) article at Computerworld about the evolution of the Spanish ICT sector: http://www.idg.es/computerworld/Innovacion-tecnologica-made-in-Spain,-la-eterna-as/seccion-ges/articulo-120126. There was some semiconductor design and manufacturing in Spain in the 80s (translated from the Spanish original): «AT&T Microelectrónica, which in 1986 produced at its Madrid factory the first Spanish chip, a custom VLSI in 1.75-micron CMOS technology, used in digital signal processing for voice and data transmission».

We end this section about semiconductors with a paragraph extracted from the blog of a Silicon Valley expert, Mr. Nenni: «Currently embedded systems account for $200B+ of the $300B+ semiconductor revenues. That is if we can agree that an “embedded system” is an electronic device with a special purpose processor, including smartphones. The other $100B+ has general purpose processors driving them. Future semiconductor growth will come from the embedded side for sure», and a post on his blog: http://danielnenni.com/2010/12/19/semiconductor-and-eda-forecasts-2011-2012/. About IP cores: http://en.wikipedia.org/wiki/Semiconductor_intellectual_property_core.

1.2. Embedded systems, microcontroller-based systems and variants:

These are special-purpose systems without a direct end user, such as microcontrollers (under any of the available technologies: ASICs, PALs, CPLDs, FPGAs), SoCs (contrary to microcontrollers, SoCs may support incumbent OSs such as Windows or Linux that need external storage), SiPs (http://en.wikipedia.org/wiki/Chip_carrier), or embedded systems in general. To be enlarged.

From the Wikipedia article: «An embedded system is a computer system designed to perform one or a few dedicated functions[1][2] often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts» and «Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure». In short, an embedded system is any sort of non-universal computer.

We are all familiar with some consumer-electronics embedded systems: PDAs, MP3 players, mobile phones, video game consoles, digital cameras, DVD players, GPS receivers and printers; household appliances such as microwave ovens, washing machines and dishwashers are including embedded systems to provide flexibility, efficiency and features.

The engineering of an embedded system is very different from the engineering of a universal computer system. From Wikipedia: «Contrasting to the personal computer and server markets, a fairly large number of basic CPU architectures are used»: «A long but still not exhaustive list of common architectures are: 65816, 65C02, 68HC08, 68HC11, 68k, 8051, ARM, AVR, AVR32, Blackfin, C167, Coldfire, COP8, Cortus APS3, eZ8, eZ80, FR-V, H8, HT48, M16C, M32C, MIPS, MSP430, PIC, PowerPC, R8C, SHARC, SPARC, ST6, SuperH, TLCS-47, TLCS-870, TLCS-900, Tricore, V850, x86, XE8000, Z80, AsAP etc».

We include in this category the HTPC (http://en.wikipedia.org/wiki/Home_theater_PC), smartphones (http://en.wikipedia.org/wiki/Smartphone), mobile tablets (http://en.wikipedia.org/wiki/Tablet_computer) and smartbooks (http://en.wikipedia.org/wiki/Smartbook). But it is clear that the smartphone segment sits right at the frontier, in transition to becoming a small variant of the PC. As with the bigger systems we will see later (PCs…), in the high-end smartphone segment complete substitutive platforms or ecosystems are emerging: the ARM-iPhone-iOS-ObjectiveC-AppStore platform of Apple (http://en.wikipedia.org/wiki/IOS_(Apple)), the ARM/MIPS/PA/x86-AndroidOS-AndroidMarket platform of the Open Handset Alliance (http://en.wikipedia.org/wiki/Android_(operating_system)), the WebOS-JavaScript/HTML-Palm App Catalog platform of HP (http://en.wikipedia.org/wiki/WebOS), the BlackBerry platform of RIM, and the brand new Nokia-Windows Phone 7 platform alliance. For more information about mobile OSs: http://en.wikipedia.org/wiki/Mobile_platform.

Personal computers (PC) and variants:

These are general-purpose, single-user microcomputers, manufactured in general with commodity components, regarding both hardware and OS software. We all know what a PC is and what its capabilities are. For a timeline see: http://www.islandnet.com/~kpolsson/comphist/

We include in this category the several mobile variants such as laptops or notebooks (http://en.wikipedia.org/wiki/Notebook_computer#Notebook), variants with different input/output technologies such as tablet PCs, and «disabled» versions such as netbooks (http://en.wikipedia.org/wiki/Netbook).

A panorama of the last decade's technological breakthroughs in the PC market: http://www.xbitlabs.com/articles/other/display/breakthroughs-2010.html.

1.3. Workstations:

These are general-purpose, single-user, hardware-enhanced (see http://pctimeline.info/workstation/) minicomputer systems: http://en.wikipedia.org/wiki/Workstation. From this article: «Historically, midrange computers have been sold to small to medium-sized businesses as their main computer, and to larger enterprises for branch- or department-level operations. Since the 1990s, when the client–server model of computing became predominant, computers of the comparable class are instead universally known as servers to recognize that they «serve» end users at their «client» computers», «presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows/Linux running on Intel Xeon/AMD Opteron. Alternative UNIX based platforms are provided by Apple Inc., Sun Microsystems, and SGI» and «three types of products are marketed under the workstation umbrella:

  • Workstation blade systems (IBM HC10 or Hewlett-Packard xw460c. Sun Visualization System is akin to these solutions)
  • Ultra high-end workstation (SGI Virtu VS3xx)
  • High-end deskside system two ways capable x64 systems».

An upsurge of workstations is expected to follow cloud computing, if it ever happens.

1.4. Servers: 

These are an evolution of general-purpose systems, specialized in some function: multiuser minicomputer systems. Wikipedia article: http://en.wikipedia.org/wiki/Computer_server. They are in general used as web servers, database servers, file servers (today in competition with NAS appliances) or print servers, in accordance with the client-server paradigm.

1.5. Mainframes:

These are general purpose, multiuser systems. Wikipedia article: http://en.wikipedia.org/wiki/Mainframe_computer.

These computer systems are mainly used by big corporations or governments to deal with transaction processing (information processing divided into individual, indivisible operations, called transactions) or to manage Enterprise Resource Planning applications.

1.6. Supercomputers.

We will give details about this segment below.

1.7. System software:

In the Wikipedia article about the usage share of operating systems there is a very interesting table with the approximate market share of the different OSs in the different segments we are describing in this post. I see fierce competition in the brand new mobile market and some stability, with dominant products, in the other segments. In the HPC and server markets the Linux distributions have the greatest market share. In HPC, Linux has almost completely replaced Unix in less than ten years, and the Microsoft product, which I suppose is proprietary and was launched several years ago, is not gaining momentum. It should be noted that free is not equivalent to gratis in this context. It seems that consumers unskilled in software programming, such as those in consumer electronics and SMEs, prefer proprietary software, but in segments where the users are skilled in this field (servers, HPC) and the field is subject to research shocks, free software is preferred.

1.8. Tools or software for CS professionals (programming languages, solution stacks, etc.):

This segment includes the programming languages and software products used by CS professionals. Not surprisingly, of all the computer system sector segments this is the most atomized and volatile one, together maybe with the application software segment (see below).

A good Wikipedia article on programming languages: http://en.wikipedia.org/wiki/Computer_language. It is difficult to find a classification of programming languages that does not blur quickly. A programming language is a human-to-computer-device communication tool; the computing device could be a general-purpose CPU or a microcontroller. Some languages are more machine-aware (machine, assembly, 1st and 2nd generation languages), some more user-aware (high-level, 3rd and 4th generation); some are general-purpose (see http://en.wikipedia.org/wiki/General-purpose_programming_language), some more domain-specific (see http://en.wikipedia.org/wiki/Domain-specific_programming_language and http://en.wikipedia.org/wiki/Programming_domain or http://en.wikipedia.org/wiki/Fourth-generation_programming_language). We are still waiting for the 100% user-aware, general-purpose 4th generation language (which some call 5th generation): the one that our grandmothers could use to program any machine.

Whatever a language's awareness or purpose, an expression that makes a computer system perform some task when pushing the «enter» button is a software product. In the software products field, the proprietary / open / free divide seems to be important. Following this divide, and despite atomization, some so-called software stacks have emerged which vertically integrate software products from the OS to the client application, passing through, for instance, web server software. LAMP, acronym of Linux + Apache + MySQL + PHP/Perl/Python, is the most used software stack for web service software, used for instance by Facebook as front-end. If another OS is used instead of Linux, the alternative free LAMP variants are WAMP (Windows), MAMP (Mac OS), SAMP (Solaris) or OAMP (OpenBSD). Other free alternatives are LEXA or LYME. At the other extreme, a fully proprietary alternative is the WINS solution stack: Windows Server OS + IIS + .NET + SQL Server database. Another hybrid is WIMP. More about solution stacks at: http://en.wikipedia.org/wiki/Solution_stack. For a comparison of web server software see http://en.wikipedia.org/wiki/Comparison_of_web_servers, and for market shares as per a January 2011 survey by Netcraft see: http://news.netcraft.com/archives/category/web-server-survey/.
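To make the layering of a solution stack concrete, here is a toy, self-contained analogue of mine using only Python's standard library (an analogy, not an actual LAMP deployment): the OS hosts the process, http.server plays the web-server role of Apache, sqlite3 plays the database role of MySQL, and the request handler plays the scripting role of PHP.

```python
# Toy analogue of a LAMP-style solution stack, collapsed into one process:
# OS -> hosts the process; http.server -> web-server layer (Apache);
# sqlite3 -> database layer (MySQL); the handler -> scripting layer (PHP).
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE posts (title TEXT)")
db.execute("INSERT INTO posts VALUES ('Hello from the data layer')")
db.commit()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "Scripting" layer: query the database and render a response.
        rows = db.execute("SELECT title FROM posts").fetchall()
        body = "\n".join(title for (title,) in rows).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8000. In a real stack each layer is a separate,
    # independently marketed product, which is the point of the text above.
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```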

Another instance of atomization, in another subsegment of the software-as-industry-tools segment: the EDA companies: http://en.wikipedia.org/wiki/Electronic_design_automation and http://en.wikipedia.org/wiki/List_of_EDA_companies and http://en.wikipedia.org/wiki/Category:Electronic_design_automation_companies. (Section to be enlarged.)

1.9. Application software:

The main divides in this segment are packaged / custom software and business / entertainment software; within business, vertical / horizontal. Even if for some enterprise applications (ERPs) and other subsegments there are clear leaders, some sub-segments can be quite atomized. See for instance the available software in the Content Management Systems segment: http://en.wikipedia.org/wiki/List_of_content_management_systems. Here the USA has worked under a Software-as-a-Product model and a software-friendly IP government policy, which has led to indubitable success, while the EU has worked under a SaaS-like model with weak IP protection (http://www.aetic.es/CLI_AETIC/ftpportalweb/documentos/POSICIONAMIENTO%20EUROPEO%20SOFTWARE(nov2008)_2.pdf), with less success. Of the TOP100 software companies only 20 are EU-based, among them: SAP (4th), Dassault Systèmes, Sage, Misys, Business Objects, Software AG, Philips, Cegedim Dendrite, Unit4Agresso, Exact, Visma. See also the following global ranking: http://www.softwaretop100.org/global-software-top-100-edition-2010-the-highlights. For more details about the EU software industry see Truffle's yearly ranking.

1.10. Software as a Service:

1.11. General conclusions:

It must be clear that while the differences between segments have been demand-driven (caused mainly by the different needs of clients / applications), there are also production differences: the products of each segment require different engineering techniques, whose cost now dominates the cost structure of these products. In general, though, the segments are not so different that the same company cannot exploit production advantages by operating in several of them: we find the same leading actors in almost all market segments.

The market size of some segments seems to be shrinking (workstations, mainframes) while that of others seems to be expanding (some embedded systems, servers). Other segments seem to have reached maturity (OS software for the PC segment). But one thing is clear: the frontier of computer systems research lies in the supercomputing / HPC segment. As soon as a great technological advance is made in this segment, it spreads down.

In the rest of this post we will restrict ourselves to the HPC hardware and software segment, which includes servers, mainframes and supercomputers. The supercomputing segment seems to be the least developed from the point of view of marketing techniques: it is the smallest within HPC (around 15%) and its clients are very far in style from the average consumer in mass markets. In short, as said in the previous post, the HPC market looks more like an infrastructure market than a product market (as the whole computing sector looked before the PC revolution).

But some forces might change this situation in the future: http://www.hpcwire.com/topic/interconnects/The-Week-in-Review-20100923.html?page=1, a link from which we highlight: «For decades, the largest U.S. automotive and aerospace manufacturers have used supercomputing technologies to pursue «Digital Manufacturing» processes. The programs they run allow them to shorten time-to-market, improve product quality, and reduce costs, by designing their products on a computer before they build expensive physical prototypes. With over 300,000 small- and mid-sized manufacturers (SMM) based in the U.S., the study conducted by NCMS and Intersect360 Research probed the reasons why the digital manufacturing concept has not been broadly adopted outside the top echelon».

III. THE HPC SECTOR:  OVERVIEW.

3.1 Market-oriented classification of HPC systems, market size and market dynamics:

Preliminaries: the best way to think about an HPC system is as a system of systems. You can make an HPC system with single processor units, memory units and I/O units, all interconnected physically close together and sharing the same OS, but you can also interconnect higher-level computer systems such as PCs, workstations, servers, or even supercomputers, as in the PRACE project, each one using its own OS.

As an example, consider an HPC system proposed for use in a fine-grained parallel application (one where the basic units communicate with each other frequently): the Higgs boson search at the LHC (http://alexandria.tue.nl/extra2/200310794.pdf). In this case the purpose of the system is to filter 2-D images of proton-proton collisions. The authors propose to use commodity PCs, further enhanced, as basic processing units. According to the paper, the topology most suitable for the interprocessor communication needed by the application is a Clos topology, and the NIC of choice to handle such a topology was Gigabit Ethernet (GigE), with a recommendation of some NIC fine-tuning so that the commodity NIC can manage the selected topology efficiently (see section 7 of the paper):

«In addition to buying a large Ethernet switch for the trigger, one can also avoid using the spanning tree algorithm to allow topologies different from trees. The spanning tree algorithm is part of the IEEE 802.1d standard that defines how a network of Ethernet switches cooperates to learn the connectivity of the network and the location of the endnodes to automatically set up the routing tables within each switch. However, the topology supported by this algorithm is limited to a tree. Saka [102] has demonstrated that, if the automatic configuration can be turned off, and explicit configuration is used, any network topology can be supported. This allows Ethernet to be organised as a Clos network, a topology which has already proven to be suitable for the ATLAS trigger[55]. The feature to turn off the automatic configuration is not common for Ethernet switches.»
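As a side note on why Clos topologies are attractive for such switch fabrics, here is a back-of-the-envelope sketch (my own, not from the paper) of the classic three-stage Clos sizing rule: with n endpoints per ingress switch and m middle-stage switches, the fabric is strictly nonblocking when m ≥ 2n − 1, while needing far fewer crosspoints than a single giant crossbar. All parameter values are hypothetical.

```python
# Back-of-the-envelope sizing of a symmetric three-stage Clos network.
# n = endpoint-facing ports per ingress/egress switch,
# r = number of ingress (and egress) switches, m = middle-stage switches.

def strictly_nonblocking(n: int, m: int) -> bool:
    """Clos's 1953 condition: m >= 2n - 1 middle switches suffice."""
    return m >= 2 * n - 1

def clos_crosspoints(n: int, m: int, r: int) -> int:
    """Crosspoints: r ingress (n x m) + m middle (r x r) + r egress (m x n)."""
    return r * n * m + m * r * r + r * m * n

n, r = 8, 16                 # 128 endpoints in total (hypothetical sizes)
m = 2 * n - 1                # minimal strictly nonblocking middle stage
endpoints = n * r

print(strictly_nonblocking(n, m))                     # True
print("Clos crosspoints:    ", clos_crosspoints(n, m, r))  # 7680
print("Crossbar crosspoints:", endpoints * endpoints)      # 16384
```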

At the beginning of the present decade there was some debate in the community of HPC practitioners regarding the classification of HPC systems (see the Bell and Gray paper, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.4722, and Dongarra et al., http://www.computer.org/portal/web/csdl/doi/10.1109/MCSE.2005.34). These two papers provide an interesting description of the situation and trends in the HPC market in the years 2001-2003. Of course, today's emergent power-efficiency concerns were scarce or absent at that time. According to the latest TOP500 release, all supercomputing systems fall into three categories:

–Cluster: an HPC system made of «independent operational components integrated by some medium for coordinated and cooperative behaviour» (Sterling). Each unit is therefore a complete computer, which could work independently even if it were not connected to the cluster. Clusters are usually made with commodity components: third-party commercial off-the-shelf processors, memories, I/O, interconnect and OS.

–MPP: massively parallel processors. Roughly, more tightly coupled systems whose nodes are connected by specialized interconnects and, unlike cluster nodes, do not work as standalone computers.

–Constellation: a system where the number of processors per node is greater than the number of nodes.

To these three kinds of high-end parallel systems one may add the systems of the distributed computing paradigm (grid computing: http://en.wikipedia.org/wiki/Grid_computing), suitable for embarrassingly parallel applications where security is not an issue, and the brand new trend called cloud computing, suitable for elastic / sporadic HPC demand. At the low end we must also mention the mainstream multicore / many-core paradigm (http://en.wikipedia.org/wiki/Multi-core_(computing)).
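As a toy illustration of this classification (my own sketch; the only hard rule taken from the definitions above is the processors-per-node versus nodes comparison for constellations, while the cluster/MPP split is simplified to node independence):

```python
# Toy classifier for the TOP500-style system categories described above.

def classify(nodes: int, processors_per_node: int,
             nodes_standalone: bool) -> str:
    if processors_per_node > nodes:
        return "constellation"          # more processors per node than nodes
    if nodes_standalone:
        return "cluster"                # independent, commodity nodes
    return "MPP"                        # tightly coupled, non-standalone nodes

print(classify(nodes=1024, processors_per_node=4,  nodes_standalone=True))   # cluster
print(classify(nodes=1024, processors_per_node=4,  nodes_standalone=False))  # MPP
print(classify(nodes=16,   processors_per_node=64, nodes_standalone=True))   # constellation
```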

Overall HPC market segmentation:

After these preliminary comments, my present (not yet definitive) view of the situation of the HPC sector can be summarized as follows: all systems in this sector can be technically classified along two dimensions,

–the general-purpose / special-purpose dimension and

–the simple (homogeneous) / complex (heterogeneous) dimension.

It seems that at present the market is subject to both vertical (segmentation by quality) and horizontal differentiation; that is, the market is partitioning into two segments: the lower end of general-purpose, commodity, build-it-yourself HPC systems and the higher end of customized HPC systems, at the extreme of which we find the special-purpose HPC systems.

Users in the commodity segment are speed satisficers (http://en.wikipedia.org/wiki/Satisficer) and cost minimizers; users in the custom segment are (or should be) speed maximizers and cost satisficers.

Despite the availability of some commodity technologies (the x86-64 processor architecture, http://en.wikipedia.org/wiki/X86; InfiniBand / Gigabit Ethernet interconnects; Linux / Windows OSs; the distributed-memory parallel programming paradigm, MPI), some problems prevent this commodity segment from becoming a mass market (at least in the SME user segment): power consumption and the software complexity of parallel applications might be the most important ones. A minimal taste of the MPI paradigm is sketched below.
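Here is that minimal sketch of the distributed-memory MPI model, using the mpi4py Python binding (this assumes an MPI implementation and mpi4py are installed; the workload split is an arbitrary example of mine):

```python
# Minimal sketch of the distributed-memory MPI paradigm with mpi4py.
# Each process owns its slice of the data (no shared memory), and the
# partial results are combined explicitly by message passing.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # id of this process
size = comm.Get_size()        # total number of processes launched

# Each rank sums a disjoint block of integers (its "local" work).
block = 1000
partial = sum(range(rank * block, (rank + 1) * block))

# Explicit communication: combine the partial sums on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum over {size} ranks: {total}")
```

Run, for instance, with `mpiexec -n 4 python sum_mpi.py`. The point is that even this trivial sum forces the programmer to think about data ownership and communication, which is part of the software complexity mentioned above.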

| Hetero-complex / homo-simple dimension | Special purpose | Research trends (special purpose) | General purpose | Research trends (general purpose) |
|---|---|---|---|---|
| More homogeneous and simple systems | ASIC | | Multi-core | Scalability. Edge: Intel SCC, 48 x86 cores, manufactured for software development |
| | FPGA | | Many-core | |
| | | | SMP / constellations | |
| | | | CPU/GPU and GPGPU systems | Hot. AMD APUs: Accelerated Processing Units |
| | Storage servers | | Servers, clusters | Power efficiency, software simplification |
| | Specialized HPC systems: from prototype to niche market? | | MPP systems | Interconnect optimization, power efficiency, software simplification |
| More heterogeneous and complex systems | | | Grid | Mature technology? |

At the low end, including more cores on the same die is the research trend, and Intel is leading it: http://www.xbitlabs.com/news/cpu/display/20100521044530_Intel_Demos_System_Based_on_48_Core_Chip.html. This prototype has been designed within Intel's Tera-Scale Computing Research Program. For Intel, the target of 1000 cores on a chip is within the reach of present technology (InsideHPC and http://www.somoslibres.org/modules.php?name=News&file=article&sid=4060).

Indeed, this might be possible, but is this the best performance, cost and power efficiency you can get for 1000 cores? If I were Intel I would not sleep quietly until I was sure that my implementation was optimal for these parameters and these given resources… An article about new products: http://www.hpcwire.com/news/Manycore-Ahead-111569069.html.
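A classic back-of-the-envelope tool for exactly this doubt (standard textbook material, not something from the articles quoted here) is Amdahl's law: if a fraction s of a program is inherently serial, the speedup on p cores is bounded by 1 / (s + (1 − s)/p), so piling up cores only pays off for almost perfectly parallel code:

```python
# Amdahl's law: upper bound on the speedup attainable with p cores
# when a fraction s of the work is inherently serial.

def amdahl_speedup(s: float, p: int) -> float:
    return 1.0 / (s + (1.0 - s) / p)

for p in (48, 1000):              # Intel SCC-sized chip vs a 1000-core chip
    for s in (0.01, 0.001):       # 1% and 0.1% serial fractions
        print(f"p={p:4d}  s={s:5.3f}  speedup <= {amdahl_speedup(s, p):6.1f}")

# With just 1% serial code, 1000 cores yield at most ~91x, not 1000x,
# which is why scalability tops the research-trends column above.
```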

GPGPU is another hot trend on the general-purpose side, and in this case it seems AMD and Nvidia are leading it: http://www.xbitlabs.com/articles/cpu/display/amd-fusion-interview-2010.html and http://www.xbitlabs.com/articles/other/display/breakthroughs-2010_13.html#sect0.

The specialized HPC systems segment interests me a lot. Until now the high pace of technological advance in general-purpose systems has kept this segment, against the players' wishes I suppose, more of a prototype market (by the time a special-purpose prototype was ready to be launched, the available general-purpose commodity systems were already quicker), but nowadays there are some niches that might justify investment in application-specialized systems. The recent moves by RIKEN and D. E. Shaw Research with their special-purpose prototypes for molecular dynamics (protein folding?), respectively MDGRAPE-3 and Anton, might be an example of this trend. It is a high-risk strategy (it has been so in the past), but if successful it can yield high benefits (not only economic ones: scientific rewards are also expected).

Side by side with these special-purpose HPC systems for biotech applications (i.e. molecular dynamics), still in the prototype phase, we find the now highly competitive market for gene sequencing machines. While this sector fits better the subsector of «devices which combine electronic and mechanical or other physico-chemical or biological technologies» that we explicitly excluded from this post, it seems worth mentioning it here and dedicating some space to it. From a Wikipedia article: «In 2008 and 2009, both public and private companies have emerged that are now in a competitive race to be the first mover to provide a full genome sequencing platform that is commercially robust for both research and clinical use,[37] including Illumina,[38] Knome,[39] Sequenom,[40] 454 Life Sciences,[41] Pacific Biosciences,[42] Complete Genomics,[43] Intelligent Bio-Systems,[44] Genome Corp.,[45] ION Torrent Systems,[46] and Helicos Biosciences.[47] These companies are heavily financed and backed by venture capitalists, hedge funds, investment banks and, in the case of Illumina, Sequenom and 454, heavy re-investment of revenue into research and development, mergers and acquisitions, and licensing initiatives.[48]»

According to this paper, the cornerstone of this biotech research and industry thread, microarrays, was developed mimicking the development of the microprocessor industry. Now microarrays (also called biochips), which can be seen as massive biological data generators, just as the LHC is a massive particle physics data generator, must be combined with the power of HPC to extract knowledge from raw data, exactly as the LHC HPC farms do. Of course there is an important difference between the physical data provided by the LHC and the biological data provided by microarrays: physical data knowledge extraction is backed by a good theory, maybe not the definitive one but good enough; there is no good theory for interpreting biological data. Put the most intelligent agent in a room without any data and its knowledge output will be zero. Put the greatest database with the greatest computational power in a room, without a good theory, and the knowledge output will be exactly the same: zero. Another field which is starting to use HPC, but which still lacks a good theory, is neuroscience: see the Blue Brain project (http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19092), the Human Connectome Project (http://www.humanconnectomeproject.org/overview/) or the Genes to Cognition Database (http://www.genes2cognition.org/careers.html). This last group is following a line of research which has led to a recent breakthrough in neuroscience. An interesting article about supercomputing in the life sciences: http://biomedicalcomputationreview.org/2/4/all.pdf. Also of interest: http://faculty.wcas.northwestern.edu/~hji403/Publications/Lee_etal_IJCBDD_2008.pdf and http://www.tcgls.com/news281105.htm.

These two parallelization approaches (microarrays and HPC) might not be that far apart, since even our well-known HPC player IBM is in the race: «In October 2009, IBM announced that they were also in the heated race to provide full genome sequencing for under $1,000, with their ultimate goal being able to provide their service for $100 per genome.[74] IBM’s full genome sequencing technology, which uses nanopores, is known as the «DNA Transistor».[75]». At present it is also unclear how useful full genome sequencing would be, since predictive medicine is still just a promise. Maybe HPC together with a (hopefully) good biomedical theory can fuel this promising new field.
A debate about the future of the cost of sequencing genomes and its utility: http://blogs.forbes.com/matthewherper/2011/01/06/why-you-cant-have-your-1000-genome/, http://scienceblogs.com/geneticfuture/2011/01/why_you_can_have_your_1000_gen.php, http://blogs.forbes.com/matthewherper/2011/01/07/debating-the-1000-genome/ and http://johnhawks.net/weblog/topics/biotech/whole-genome/sequencing-1000-dollar-genomes-2011.html. More about the business of personalized genomics: http://carey.jhu.edu/one/2010/fall/genomics-revolutions/.

The vertical differentiation of the HPC market, the present complexity of the research challenges in the high end needed to get to exascale systems (that is, the huge R&D investments required) and the opportunity of going mass-SME in the commodity segment might cause incumbents to concentrate their energies on the HPC commodity segment, where economic returns might be higher, and leave this high-end niche free. I wonder which other commercially interesting niche markets for special-purpose HPC systems exist besides MD.

Overall HPC market size:

Two studies, from IDC and Intersect360, point to the fact that the market size of the HPC industry in 2009 was close to 9 billion USD. According to Tabor Research data, the supercomputing market in 2008 was 7.5 billion USD (http://www.theregister.co.uk/2009/05/28/sicortex_assets/).

On the other hand, in this article with data from Intersect360, the HPC market in total (servers, storage, networks and system / application software) was valued at 19 billion USD in 2008 (http://www.thefreelibrary.com/Dell+High+Performance+Cluster+Using+Intel+Xeon+Technology+Powers…-a0212123275). From the IDC press release for the Worldwide High-Performance Technical Server QView:

«Factory revenue for the high performance computing (HPC) technical server market declined by 11.6% in 2009 to $8.6 billion, down from $9.7 billion in 2008, according to the International Data Corporation (IDC)» and «A bright spot was the «Supercomputers» segment for HPC systems priced at $500,000 and up, which grew by 25% to reach $3.4 billion during the difficult year. Fueled by multiple transactions in the $100 million range, the top bracket in this segment, for HPC systems priced above $3 million, grew even faster, expanding by a whopping 65% to reach $1.0 billion. At the other end of the price spectrum, revenue from «Workgroup» HPC systems priced below $100,000 slid 33% to $1.7 billion as buyers delayed or canceled some planned acquisitions in this segment that is characterized by purchases based on shorter sales cycles and more discretionary spending».

More data at this link: http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22263210&sectionId=null&elementId=null&pageType=SYNOPSIS. It is unclear whether their sector definition fits ours exactly.

How big is a 9 billion USD / year market? Not so big: all 500 companies in the 2010 Fortune list of biggest companies were above this yearly revenue figure. Not so small: the nominal GDP of around 30% of the world's countries is this size or smaller (most of them, of course, small countries).

The several estimates might differ because of the inclusion or not of software. We can also compare the size of the HPC market with the size of the overall IT industry (excluding telecom, mobile and wireless or e-business) at the Plunkett Research site: http://www.plunkettresearch.com/Computers%20software%20technology%20market%20research/industry%20statistics. The global IT market in 2009 was about 1.44 trillion USD, a figure which includes 761 billion USD of IT services; services thus still account for around 50% of the IT sector.

Overall HPC Market dynamics:

An interesting blog post about the market forces that shape the structure and dynamics of this industry: http://insidehpc.com/2010/10/19/why-the-hpc-growth-equation-hasn%e2%80%99t-added-up/. And a study by Intersect360 Research, a consulting company specialized in HPC, about the barriers to the adoption of HPC technologies by SMEs: http://www.intersect360.com/industry/research.php?id=35.

3.2 Hardware processors submarket:

http://www.intersect360.com/industry/research.php?id=41

–CPU processors.

A major driver for R&D at present in the low-end datacenter market is energy efficiency, and for general-purpose CPUs an important parameter is the instruction set architecture (ISA), with two well-known paradigms: RISC and CISC (reduced and complex instruction set, respectively). RISC designs are more power-efficient and CISC designs are better for performance. The major players in this market are Intel, AMD, IBM (Power series), Nvidia and Oracle. Intel, with x86 (x86-64), is the main general-purpose CISC vendor, followed by AMD, while most of the others licence RISC technologies from ARM or make their own RISC designs. To be enlarged.

–GPU processors.

A PhD thesis about the new trend of heterogeneous architectures combining CPUs and GPUs: http://babrodtk.at.ifi.uio.no/files/publications/brodtkorb_phd_thesis.pdf.

3.3. Hardware storage submarket:

General data about this submarket from Intersect360: http://www.intersect360.com/industry/research.php?id=2. A market trends forecast for 2011 at Enterprise Storage Forum: http://www.enterprisestorageforum.com/features/article.php/3917876/Top-10-Storage-Predictions-for-2011-and-Beyond.htm. The sections of this web site are: storage backup and recovery, storage hardware, storage networking, storage management, storage services and storage technology. From the same site, the storage basics: http://www.enterprisestorageforum.com/reports/index.php/20401. It is a series of articles about storage, starting in 2001.

Some of the names to store in your mind about this submarket are: kb, kB, MB, GB as storage capacity measures; kbps, Mbps, Gbps as measures of the amount of data transferred through a channel / network; RAID (http://www.webopedia.com/TERM/R/RAID.html), disk array, DAS, NAS (a file server), SAN (a networked storage system) as storage systems; and FC (a data transmission protocol for storage networks), FC-AL, FCoE, FCIP, SCSI (a block protocol), ATA (a block protocol), iSCSI, 10GE (a data transmission protocol for LANs), 40GE (idem), IB (an HPC system network data transmission protocol) as protocol specifications or standards, plus combinations such as IP-SAN, FC-SAN… Relevant links here: http://es.wikipedia.org/wiki/Modelo_OSI and http://en.wikipedia.org/wiki/Internet_Protocol_Suite.
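Since the lowercase/uppercase convention above (kb vs kB: bits vs bytes) causes endless confusion, here is a trivial conversion sketch (my own idealized numbers; protocol overhead is ignored):

```python
# Link rates are quoted in bits per second (e.g. 10GE = 10 Gbps),
# while capacities and file sizes are quoted in bytes (GB). 1 B = 8 b.

def gbps_to_gb_per_s(link_gbps: float) -> float:
    """Convert a link rate in gigabits/s to gigabytes/s."""
    return link_gbps / 8.0

def transfer_time_s(size_gb: float, link_gbps: float) -> float:
    """Idealized time to move size_gb over the link, ignoring overhead."""
    return size_gb / gbps_to_gb_per_s(link_gbps)

print(gbps_to_gb_per_s(10))        # a 10GE link moves at most 1.25 GB/s
print(transfer_time_s(100, 10))    # so a 100 GB dataset needs >= 80.0 s
```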

Some key performance parameters to take into account in storage are: bandwidth, throughput, scalability, software environment requirements, usability and management.

«According to the analyst firm IDC, the NAS market grew 11.4 percent year over year to $668 million, led by EMC, with 38.1 percent revenue share and followed by Network Appliance (NetApp) with 27.2 percent share». Some of the players in this storage solutions market in the USA are EMC, NetApp, Iomega, Micronet or Pillar Data. More interesting information here: http://www.enterprisestorageforum.com/sans/features/article.php/11188_3736971_2/A-Small-Business-Guide-to-Network-Attached-Storage.htm.

SAN networks are not exactly like the Internet, WANs or LANs: «Remember, there is no protocol for SAN routing. Everything we’re talking about here is vendor specific, unlikely to interoperate with other vendors’ products, and subject to interpretation and bias when evaluating the effectiveness of such mechanisms» (http://www.enterprisestorageforum.com/sans/features/article.php/3735451). «A storage network is any network that’s designed to transport block-level storage protocols».

«There is one important take-away from the NAS world, however. That is the difference between block-level storage protocols and file-level protocols. A block-level protocol is SCSI or ATA, whereas file protocols can be anything from NFS or CIFS to HTTP. Block protocols ship an entire disk block at once, and it gets written to disk as a whole block. File-level protocols could ship one byte at a time, and depend on the lower-level block protocol to assemble the bytes into disk blocks» (http://www.enterprisestorageforum.com/ipstorage/features/article.php/3701021).
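As a toy illustration of that block/file distinction, this Python sketch (the names and the 4 KiB block size are mine, for illustration only) mimics the assembly work a block layer performs under a file-level protocol: bytes arriving in arbitrarily sized chunks are only ever shipped as whole, fixed-size blocks:

BLOCK_SIZE = 4096  # bytes; a common disk block size, chosen here for illustration

def assemble_into_blocks(byte_chunks):
    # Accumulate arbitrarily sized byte chunks and yield whole disk blocks.
    buffer = b""
    for chunk in byte_chunks:
        buffer += chunk
        while len(buffer) >= BLOCK_SIZE:
            yield buffer[:BLOCK_SIZE]   # a block protocol ships whole blocks
            buffer = buffer[BLOCK_SIZE:]
    if buffer:                          # zero-pad the tail to a full block
        yield buffer + b"\x00" * (BLOCK_SIZE - len(buffer))

# A "file-level" stream of uneven writes ends up as two whole 4 KiB blocks:
blocks = list(assemble_into_blocks([b"a" * 1000, b"b" * 3500, b"c" * 100]))
print(len(blocks), [len(b) for b in blocks])  # -> 2 [4096, 4096]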

Another topic included in storage is clustered or distributed file systems (http://en.wikipedia.org/wiki/Distributed_file_system) such as IBM's GPFS (http://en.wikipedia.org/wiki/IBM_General_Parallel_File_System) or Xyratex's Lustre (http://en.wikipedia.org/wiki/Lustre_(file_system)). To be enlarged, including other DFSs in the HPC market.

3.4. Hardware Interconnect submarket:

We include here information about NoCs for multicore systems, system interconnects, storage networks and LANs (http://www.intersect360.com/industry/research.php?id=3).

The components of the interconnection network in a cluster are NICs, links (a single wire, multiple parallel cables or optical fiber) and network switches, which interconnect a number of channels and handle routing operations between them. An interconnection network is therefore a combination of physical devices and software, both following a given industrial standard (Ethernet, InfiniBand).
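To see how a topology choice translates into concrete graph metrics, here is a small sketch (assuming the networkx Python library is available) that builds a 4x4 2-d torus, one of the standard topologies discussed below, and reports node degree, diameter and average distance, the usual first-order latency proxies:

import networkx as nx

# A 4x4 2-d torus: a grid whose rows and columns wrap around.
torus = nx.grid_2d_graph(4, 4, periodic=True)

print("nodes:", torus.number_of_nodes())           # 16
print("degrees:", {d for _, d in torus.degree()})  # {4}
print("diameter:", nx.diameter(torus))             # 4 (2 hops per dimension)
print("avg distance:", nx.average_shortest_path_length(torus))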

A recent market oriented view of the interconnect segment: http://files.shareholder.com/downloads/MLNX/0x0x292259/35E7EA6A-1E09-4F0F-BA55-E4E5C80FE453/HPC_Market_and_Interconnects_Report.pdf. A more technically oriented report about the performance of leading interconnects at:  http://www.chelsio.com/assetlibrary/whitepapers/HPC-APPS-PERF-IBM.pdf.

Some NoC papers from an architecture engineering point of view: ftp://ftp.cs.utexas.edu/pub/dburger/papers/ASPLOS02.pdf, http://www.cs.utah.edu/~rajeev/pubs/hpca10a.pdf, http://cva.stanford.edu/publications/2006/jbalfour_ICS.pdf, and http://www.ece.cmu.edu/~omutlu/pub/bless_isca09.pdf. This last one is on bufferless NoC routing, a philosophy similar to the one we propose below (see «Some technicalities on interconnection networks»).

Interconnect market size: around $2 billion in 2009, expected to grow to $2.5 billion by 2013 (source: IDC).

A list of interconnect devices and their bandwidths: http://en.wikipedia.org/wiki/List_of_device_bandwidths.

A recent breakthrough by IBM, as reported by Computerworld: http://www.computerworld.com/s/article/9198799/IBM_chip_breakthrough_may_lead_to_exascale_supercomputers?source=toc.

A technical paper about interconnection networks: http://ccr.sigcomm.org/online/files/p63-alfares.pdf. And yet another where we can see how signaling or packaging technologies affect the optimal interconnection network topology; changes in packaging technologies also have effects on preferred topologies.

Some technicalities on interconnection networks:  

As will be apparent, I'm especially interested in technological and market issues regarding interconnection networks, since it is here that the results of the patent application might find their road to practical use. Sporadic ruminations emerge in my mind about how to base the whole communications issue of HPC systems (collective or point-to-point) efficiently on a bunch (as many as you can route simultaneously in the same network without interference) of hamiltonian cycles or paths traversing Cayley digraph/graph-based interconnection network topologies.

To this purpose I'm re-reading the relevant chapters of the book by Professors Duato, Yalamanchili and Ni. This is a great book, written from an engineering perspective but quite comprehensive. See for instance page 8 of the introduction, with a complete empirical classification of interconnection networks, or the classification of point-to-point routing algorithms in chapter 4, page 140. Chapters are devoted to: the message switching layer; deadlock, livelock and starvation; routing algorithms; collective communication support; fault-tolerant routing; network architectures; messaging layer software; and performance evaluation.

Against my ruminations, the authors of this book state clearly, first, that «in addition to the topologies defined above (they refer to mesh, torus, hypercube or tree-like) many other topologies have been proposed in the literature. Most of them were proposed with the goal of minimizing the network diameter for a given number of nodes and node degree. As we will see in chapter 2, for pipelined switching techniques, network latency is almost insensitive to network diameter specially when messages are long. So it is unlikely that these topologies are implemented», and this includes of course most of the patent application topologies; secondly, that «deadlock avoidance is considerably simplified if unicast and multicast routing use the same routing algorithm. Moreover using the same routing hardware for unicast and multicast routing allow the design of compact and fast routers. The Hamiltonian path-based routing algorithms proposed in previous sections improve performance over multiple unicast routing. However their development has been in a different track compared to e-cube and adaptive routing. Moreover it makes no sense sacrificing the performance of unicast messages to improve the performance of multicast messages, which usually represent a much smaller percentage of network traffic. Thus, as indicated in [269] (a bibliographic reference) it is unlikely that a system in the near future will be able to take advantage of Hamiltonian path-based routing».

To be honest, at present I do not have enough elements to weigh these two statements. I knew them before starting the patent application and, today, they do not stop my ruminations, which might be totally wrong, but which can be summarized as follows:

–if hamiltonian path-based routing algorithms for unicast/multicast operations can improve performance compared to multiple unicast routing algorithms,

if, instead of on-demand routing to the node, we use simultaneous hamiltonian-cycle/path carrier signals, which deterministically traverse the network with or without node messages and onto which node messages can be encoded and transported to the desired unicast or multicast destinations when needed, this method might make the interconnection network more predictable, more bandwidth-flexible (if the carriers start to be always busy and messages are therefore delayed, just insert a new hamiltonian carrier into the network), not less efficient (note that the hamiltonian carriers might be selected so that optimum unicast routing for all pairs of points can be approximated; the more hamiltonian carriers in the network, the better the approximation) and easier to program in parallel. This is the part of the ruminations I still have to underpin from a mathematical and hardware/software engineering point of view.

Of course you do not need to physically implement the underground-transportation-like simultaneous hamiltonian routing. It might be enough that each node, before dropping messages onto the network, assumes that such an underground system exists and that all nodes share the same system. The availability of multiple hamiltonian traversals in the network would be a key feature of the whole communications system. Adding a new hamiltonian traversal to an ongoing application as communication needs grow plays the same role as adding an additional unit of currency to an ongoing economy as it grows. It is known that with only one currency unit all the transactions the members of a big economy want to effect can be done, but they would need a huge amount of time and a huge memory for keeping track of credit agreements. With two currency units the economic system will take half the time (half the memory?), etc. Accordingly, for a given number of processors, the higher the number of hamiltonian traversals, the more bandwidth the communication system can deliver on demand. In economic systems there is an optimum (equilibrium) amount of currency units, and surely there is one as well for communications. I am collecting data in order to check whether the numbers of hamiltonian traversals of well-known topologies are known (hopefully a formula for a given family and digraph/graph degree) and to compare them.
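To make that last data-collection step concrete, here is a minimal brute-force sketch (function names are mine, and it only scales to toy sizes) that counts the directed hamiltonian cycles of the Cayley digraph of Z_m x Z_n on two generators; with generators (1,0) and (0,1) this is exactly the directed 2-d torus, so different 2-generator choices can be compared on equal node counts:

from itertools import product

def cayley_digraph(m, n, generators):
    # Successor lists of the Cayley digraph of Z_m x Z_n for the given generators.
    nodes = list(product(range(m), range(n)))
    succ = {(x, y): [((x + a) % m, (y + b) % n) for (a, b) in generators]
            for (x, y) in nodes}
    return nodes, succ

def count_hamiltonian_cycles(nodes, succ):
    # Depth-first count of directed hamiltonian cycles anchored at a fixed start node.
    start, size = nodes[0], len(nodes)
    count = 0
    def dfs(v, visited, depth):
        nonlocal count
        if depth == size:
            count += int(start in succ[v])  # close the cycle back to the start
            return
        for w in succ[v]:
            if w not in visited:
                visited.add(w)
                dfs(w, visited, depth + 1)
                visited.remove(w)
    dfs(start, {start}, 1)
    return count

nodes, succ = cayley_digraph(4, 4, [(1, 0), (0, 1)])  # the 4x4 directed torus
print(count_hamiltonian_cycles(nodes, succ))

Exact counting is exponential in general, so this only serves to compare small instances; for real machine sizes one would need the closed formulas I am looking for.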

and finally, if, holding the number of nodes constant (or within given upper and lower bounds), we know that topologies based on 2-generated Cayley digraphs such as those described in the patent application have many more hamiltonian traversals than the usual topologies (mesh, torus, hypercubes…). That is the case at least compared with 2-d toruses (which are abelian 2-generated Cayley digraphs) and, by extension, with 2-d meshes (which are pruned 2-d toruses). I know very little about the hamiltonian traversal problem for meshes, and I leave pending the collection of papers about this problem. Some papers I've found after a first [mesh hamiltonian] Google search: http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F7362%2F19970%2F00923599.pdf%3Farnumber%3D923599&authDecision=-203 and http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WP0-50J3DBJ-2&_user=10&_coverDate=10%2F31%2F2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1593074001&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=ef0cb3baaad89c819c5324aab8236716&searchtype=a

Others: http://www.cs.uga.edu/~mir/files/pyramid.pdf, http://www.computer.org/portal/web/csdl/doi/10.1109/ICSPS.2009.151, http://portal.acm.org/citation.cfm?id=1126629, http://www.ingentaconnect.com/content/els/01678191/2002/00000028/00000009/art00135 and http://www.dominic-schupke.de/papers/schupke_CL2004-1322_preprint.pdf. Most of these papers are related to wireless networks and fault tolerance.

then, future interconnection network designers might change the «optimizing for unicast routing is better than optimizing for multicast routing» principle. Of course this possible method might need a huge amount of pre-computation and possibly also a huge amount of storage space (lookup tables).

I've seen that the issue of the availability of disjoint hamiltonian traversals in the same network has been extensively studied in the mathematical and engineering community, in the context of fault tolerance (which will be more and more important in exascale times), for popular interconnect topologies: meshes (a pruned torus), circulants, toruses (of which hypercubes are just a special case), stars (a non-hamiltonian Cayley graph made up with involutions), de Bruijn and Kautz networks (http://pl.atyp.us/wordpress/?p=1275)… I do not comment on this here because this part is growing too long, and the SiCortex offshoot will soon be followed by a technical post about interconnects where I will migrate all this content.

–To the above ruminations we must add a new idea about how to handle the memory wall, again possibly completely wrong: it is known that as a system scales, data movement (memory to CPU, back and forth) becomes the biggest bottleneck. Therefore, in these systems, does it make sense to keep a data structure representing the problem in processor or node main memory while the system is working? IMHO, while working, data must be either in individual node caches or moving in the network from node to node. Using the simultaneous hamiltonian routing scheme sketched above, some hamiltonian traversals could be used to encode the input and some others to handle internode communication. At the moment, just an idea… Regarding this, the Gordon SDSC system architecture might be of interest: http://www.sdsc.edu/News%20Items/PR110409_gordon.html.

3.5. System Software (OS) / middleware:

Most HPC systems use Linux as the node OS. Other operating systems available for this computer system segment are: (pending).

3.6. Programming Languages, software tools and Application software for parallel computing:

MPI and PVM, the classic message-passing systems for parallel programming.
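For a flavour of the message-passing style that MPI embodies, here is a minimal point-to-point sketch using the mpi4py Python bindings (assuming mpi4py and an MPI runtime are installed; the file name is arbitrary, run it with something like mpiexec -n 2 python example.py):

from mpi4py import MPI

comm = MPI.COMM_WORLD   # the communicator spanning all launched processes
rank = comm.Get_rank()  # this process's id within the communicator

if rank == 0:
    # Process 0 sends a (pickled) Python object to process 1.
    comm.send({"step": 1, "payload": [0.0, 1.0, 2.0]}, dest=1, tag=11)
elif rank == 1:
    # Process 1 blocks until the matching message arrives.
    data = comm.recv(source=0, tag=11)
    print("rank 1 received:", data)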

Intersect360 study about the HPC applications market:  http://www.intersect360.com/industry/research.php?id=34  .

The actors are the independent software vendors (ISVs): http://en.wikipedia.org/wiki/Independent_software_vendor.

3.7. HPC as a service (GRID, SOAS, SaaS, CLOUD):

http://www.hpcinthecloud.com/news/Microsoft-Takes-Supercomputing-To-The-Cloud-94178284.html and http://www.it-tude.com/cloudservicesproducts.html, and http://www.it-tude.com/whatiscloudcomputing.html and http://www.it-tude.com/aboutgrid.html and http://www.it-tude.com/types-of-grids.html.

IV. HPC SECTOR. VENDORS.

–Systems:

Dawning, Lenovo, NUDT, Inspur, Sunway (P.R. China), T-Platforms (Russia), Fujitsu, Hitachi, NEC (Japan), Bull (EU; the EESI report confirms that Bull is the only vendor of supercomputing systems in the EU; as said above, the HPC market is wider than supercomputing, so it is still possible that there are other vendors of HPC systems), IBM, HP, Cray (http://biz.yahoo.com/e/100507/cray10-q.html and http://www.nccs.gov/wp-content/training/cray_meeting_pdfs/Cray_Tech_Workshop_sscott_2_26_07.pdf) (USA), Dell (and its path to the low-end mass market: http://www.hpcwire.com/news/Supercomputing-Expands-into-Smaller-Markets-113111289.html), Quadrics (shut down operations in 2009), SiCortex (shut down operations in 2009), (to be continued)…

–Processors:

Renesas (especially strong in microcontrollers), Intel (a recent agreement, reported at $1.5 billion, has settled its IP dispute with Nvidia), AMD, ARM Holdings (http://en.wikipedia.org/wiki/ARM_Holdings; from this article: «Unlike other microprocessor corporations such as AMD, Intel, Freescale (formerly Motorola) and Renesas (formerly Hitachi and Mitsubishi Electric),[28] ARM only licenses its technology as intellectual property (IP), rather than manufacturing its own CPUs. Thus, there are a few dozen companies making processors based on ARM’s designs. Intel, Texas Instruments, Freescale and Renesas have all licensed ARM technology. In 2007, 2.9 billion chips based on an ARM design were manufactured». ARM designs RISC architecture processors, suitable for mobile devices), IBM (Power series), (to be continued)… http://en.wikipedia.org/wiki/List_of_the_largest_global_technology_companies

–Interconnects / networking:

Myricom (Myrinet), Quadrics (closed operations in 2009), Cisco, … (to be continued)…

–Storage:

Cisco

–OS/Middleware:

Microsoft (Windows family), BSD Unix, Linux, Sun (Solaris, a Unix-like OS), …

–Tools / Software Applications:

Web server software: Apache (approx. 60% market share), Microsoft (approx. 22% market share), Sun, Nginx, Google, NCSA, Lighttpd; see http://news.netcraft.com/archives/category/web-server-survey/.

Search engines:

–IaaS / HPC as service / Cloud computing: http://www.hpcinthecloud.com/features/Clouds-Little-Helpers-Companies-to-Watch-in-2011-111850034.html.

Amazon, Google,

V. HPC SECTOR: APPLICATIONS AND USERS.

State / Public Sector users:

A nice survey about state / public sector users from EESI:  http://www.eesi-project.eu/media/download_gallery/EESI_Investigation_on_Existing_HPC__Initiatives_EPSRC_D2%201_FF.pdf

Private sector users:

http://insidehpc.com/2007/03/12/computerworld-article-on-hpc-in-industry/

http://www.thefreelibrary.com/Dell+High+Performance+Cluster+Using+Intel+Xeon+Technology+Powers…-a0212123275

Applications:

From the EESI site: «Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources».

(Note: Links are to wikipedia pages).

VI. DISTRIBUTION CHANNELS: IC AND HPC. 

–Distributors:

Tech Data (http://www.techdata.es/Pages/Start.aspx?corpregionid=40&Culture=es-ES),

Computer 2000,

Ingram Micro.

–Retailers (resellers) and integration partners

VII. TRENDS AND CHALLENGES.

http://managersmagazine.com/index.php/2009/02/los-cazatendencias-de-2009-intel-idc-fortinet-information-architects-strange-corp-gartner-y-nowandnext/

a) Theoretical challenges and trends:

Ideally, models of parallel computation and their associated theoretical complexity analysis allow researchers to test the efficiency of algorithms independently of a particular system architecture and programming language. A model is a good model if the predicted performance of a given algorithm is not far from (ideally exactly equal to) the real performance when implemented on a given architecture-language pair. Relevant metrics and features are the relative metrics (speed-up, optimality) and the absolute ones (re-usability, portability, scalability). While there are several models for parallel systems, it seems there is still no fully satisfactory model and complexity analysis of parallelism that takes into account synchronicity/asynchronicity, homogeneity/heterogeneity, and CPU, memory (use and access), communication (point-to-point and collective) and I/O access all at the same time (see for instance http://cstheory.stackexchange.com/questions/4939/what-is-the-right-theoretical-model-to-design-algorithms-for-current-and-upcoming).
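For reference, the standard definitions of the relative metrics mentioned above, with T_1 the best sequential running time and T_p the running time on p processors, are:

S(p) = \frac{T_1}{T_p} \quad \text{(speed-up)}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p} \quad \text{(efficiency)}

A parallel algorithm is usually called cost-optimal when its total work p·T_p matches T_1 up to a constant factor, i.e. when the efficiency stays bounded away from zero as p grows.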

For serial algorithms the most used model is the word RAM model. The new explosion of parallelism (i.e. manycore in industry and the exascale challenge in HPC) demands a satisfactory model: http://www.umiacs.umd.edu/conferences/tmc2009/. Some serial models that address important issues that must be taken into account in parallelism are the Hierarchical Memory Model and the I/O model of Aggarwal and Vitter. Some parallel models are: the PRAM model (http://en.wikipedia.org/wiki/Parallel_Random_Access_Machine), which neglects communication and synchronization; the BSP model (http://en.wikipedia.org/wiki/Bulk_synchronous_parallel and http://fds.oup.com/www.oup.com/pdf/13/9780198529392.pdf), developed in the 80s-90s by Valiant, which addresses communication and synchronization; the LogP model (http://en.wikipedia.org/wiki/LogP_machine) of 1993 by Culler et al.; the Queued Shared Memory (QSM) model of Gibbons, which efficiently simulates the BSP model; network models, that is, models that are network dependent; or the PEM model of Arge et al. of 2008. Now, with the multicore, GPGPU and SoC paradigms, new models are arising that can address their challenges, like LoPRAM for multicore, CUDA (http://www.nvidia.com/object/GPU_Computing.html) for GPGPU, the more programming-oriented XMT-PRAM (http://www.umiacs.umd.edu/users/vishkin/XMT/xmt-intro-6-12-06.pdf) for SoC, Paraleap for cloud computing and Cilk for multithreaded parallelism… the list is not exhaustive, I suppose.
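As one concrete example of what such a model buys you, the cost of a single superstep in the BSP model is usually written, for p processors with w_i the local work of processor i, h the maximum number of words any processor sends or receives, g the per-word communication cost (inverse bandwidth) and l the barrier synchronization cost, as:

T_{\text{superstep}} = \max_{0 \le i < p} w_i + g \cdot h + l

and the predicted running time of a program is simply the sum of its superstep costs, so an algorithm designer can minimize this expression without committing to any particular machine.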

Some interesting papers about parallel models: http://www.cs.rice.edu/~vs3/comp422/lecture-notes/comp422-lec20-s08-v1.pdf and http://parasol.tamu.edu/~amato/Courses/626/references/model-survey.pdf and http://www.cs.virginia.edu/~skadron/cs793_s07/paralleX.pdf and, from Wikipedia, http://en.wikipedia.org/wiki/Bridging_model; and a brand new one from an HPC expert, Thomas Sterling: http://www.exascale.org/mediawiki/images/a/a8/Sterling_IESP.pdf. However, he has a practical background and a different idea of what a model is. An experimental test of how two models can predict the real performance of a given system: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.6329. A brand new thesis (2010) with model descriptions, equivalences and related complexity issues: http://users.uoa.gr/~glentaris/papers/MPLA_thesis_lentaris.pdf, and a similar thesis, also quite recent: http://cs.ubishops.ca/theses/zhangy200812.pdf. Both a must. The literature on this subject is quite old, as a 1974 paper from Professor Lipton titled «A comparative study of models of parallel computation» shows. However, that paper seems to deal with models of concurrency and not of parallelism as we understand it today (I do not have free access to the paper, but I have seen it cited in the context of Petri nets).

b) Trends:

–Heterogeneous many-core systems (CPU/GPU),

–NoCs,

–SaaS / Cloud,

–Wireless.

The trends as seen by IDC: http://www.hpcuserforum.com/EU/downloads/IDC_HPCTop10Predictions2010.pdf.

c) Engineering Challenges in HPC:

The Exascale challenge will surely be a hot topic during 2011.

It is now on the political agenda of science agencies in several countries/economic blocks such as the USA, the EU, Japan or China.

Scientific communities in the HPC arena are already working on it, and the idea is trickling down to civil society (for instance, to big players in industry). Many decisions have to be taken during this year; some exascale programs, projects or initiatives have already started.

One formula which can summarize the roadmap to exascale computing is: from electronics to photonics, and from thousands to millions of nodes. The latter raises data movement and power issues, as well as the software stack (system and application) scalability issue.

The best summary of the exascale challenge issues I've seen so far comes from DoE SciDAC (hat tip to InsideHPC): http://www.scidacreview.org/1001/html/hardware.html. To summarize: one thing is to have a system able to perform at peta- or exascale level; this is mainly a hardware and infrastructure software challenge: scalability, fault tolerance, fast and efficient data movement (memory and communication bottlenecks or walls), and power consumption. Another thing is to be able to use this system at 100% of its capabilities; that is the software stack (system and application) challenge: operating system scalability, file system performance, message passing scalability, and application readiness to use multi-core for petascale and heterogeneous many-core for exascale.

Another great source on HPC trends from the end-user point of view is this article: http://www.rdmag.com/Featured-Articles/2011/02/Information-Technology-Computer-Technology-High-Performance-Innovation/. The article comments on a survey they have made. What is HPC used for in R&D? More in development than in fundamental science: people involved in simulation, modelling, engineering, data generation and test are the heavy users. It is less used in the physical sciences (thermodynamics, physics, material sciences), energy research and chemistry; much less in biological and environmental sciences (general biological research, proteomics, genomics); and only slightly in climate research and astrophysics. According to users, the main challenges for HPC vendors and ISVs are software optimization for parallel systems (the software wall!) and cost reduction, so that HPC can expand into the SME segment.

Regarding data movement, we extract the following two paragraphs from the above-quoted SciDAC document:

«the hundreds of miles of wire inside a supercomputer and the data packets traveling along these wires can be imagined as a giant highway system. And just like a highway system, it can have traffic jams (data congestion), delays (data arrive late), and even crashes (if some component becomes overwhelmed by incoming traffic). The challenge is ensuring the data arrives on time just like you try to get to work on time».

and

«The data movement challenge in the Exascale Roadmap is divided into three categories, each with complementary research activities:

  • On node — node architecture design, new memory management schemes, and improved memory capacity and speed through 3D stacking
  • Between nodes — interconnect design, optical communication, performance, scalable latency, bandwidth, and resilience
  • File System I/O — scalability, performance, and metadata»

Regarding the data movement/memory wall problem, see also this white paper about terascale memory from Intel: http://www.intel.com/technology/itj/2009/v13i4/ITJ9.4.7_MemoryChallenges.htm. You can just register and download.

Regarding power and cooling, from this interesting link, http://www.delltechcenter.com/page/HPC+Power+and+Cooling%3A++Introduction+%E2%80%93+Part+1: «HPC system design is not just only about number of processors, memory per processor cores, interconnects, or storage capacity and throughput, it’s also about determining the power and cooling aspect of the system to meet the data center requirements, which will result in a more power efficient and reliable system». Parts II and III of this series of posts about power: http://www.delltechcenter.com/page/HPC+Power+and+Cooling%3A++Power+Consumption+%E2%80%93+Part+2 and http://www.delltechcenter.com/page/HPC+Power+and+Cooling:++Amps+%E2%80%93+Part+3.
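A back-of-the-envelope calculation shows why power is the binding constraint at exascale. Assuming the frequently cited target of about 20 MW for a sustained exaflop machine:

\frac{10^{18}\ \text{FLOP/s}}{2 \times 10^{7}\ \text{W}} = 5 \times 10^{10}\ \text{FLOP/s per watt} = 50\ \text{GFLOPS/W}

that is, roughly 50 GFLOPS per watt, well over an order of magnitude beyond the most power-efficient systems on the Green500 list at the time of writing.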

Regarding programming for exascale massively parallel, many-core, node-heterogeneous systems: a roadmap for exascale software from IESP (International Exascale Software Project): http://www.exascale.org/mediawiki/images/2/20/IESP-roadmap.pdf; a white paper from the OFA (OpenFabrics Alliance, a collective initiative whose aim is to develop an OS- and interconnect-agnostic software stack): http://www.openfabrics.org/docs/The_Case_for_OFA.PDF; and the student point of view, a blog entry with comments about an exascale workshop: http://software.intel.com/en-us/blogs/2011/01/10/preparing-for-extreme-parallel-enviroments-from-a-students-perspective/.

Millions of nodes will also require innovation in packaging and physical space optimization technologies (motherboards, racks, blades and… real-estate monthly rent!).

Software licencing methods and costs.

Appendix. Other interesting links:

a) A news aggregator for the HPC field I was not aware of. It looks industry oriented and will probably become one of my daily musts, like the Theory of Computing Aggregator, which is more theory and research oriented. I will include it immediately in the blogroll.

http://www.accre.vanderbilt.edu/rss/HPC/index.html

Hmmm. My enthusiasm about this aggregator is cooling: it was last updated on March 26, 2010. At least it might provide good links.

b) http://www.hpcwire.com/blogs/. Still alive!

c) http://storagemojo.com/about/. Alive. The blog of an entrepreneur in the computing systems sector, mainly devoted to storage devices. Market oriented; it includes price lists. Great!

d) http://www.hpcdan.org/. Alive. The blog of a Microsoft VP.

e) http://scalability.org/?page_id=96. Alive. The blog of an HPC entrepreneur.

f) http://interconnects.blogspot.com/. Dead? Not updated since 2008.

g) http://blogs.sun.com/HPC/. Dead? Not updated since January 2010.

h) http://www.clusterconnection.com/2009/06/cluster-or-constellation/. An interesting forum for HPC. Seems to be related to Intel.

i) http://gpgpu.org/2010/12/22/phd-thesis-brodtkorb.

Terms and conditions: 1. Any commenter of this blog agrees to transfer the copy right of his comments to the blogger. 2. RSS readers and / or aggregators that captures the content of this blog (posts or comments) are forbidden. These actions will be subject to the DMCA notice-and-takedown rules and will be legally pursued by the proprietor of the blog.