
MapR's failure represents nothing.

The technology business has much in common with the fashion industry. Both follow fads, and both have a gaggle of posers who flit from seasonal event to seasonal event. They are hype-driven businesses, and just as one failed fashion trend says nothing about the next season, the failure of a batch of companies in one segment of tech is not a warning for all of tech. Far from being an industry-wide cautionary tale, the write-off of the $280M invested in the now defunct startup MapR represents nothing.

In 2016 MapR was valued at $1B; today it is worth nothing, or close to it. The public cloud providers forced MapR's largest rivals to consolidate to cut costs and gain scale. It is incredibly difficult to compete against hyper-scale companies with near-infinite resources, so Hortonworks and Cloudera decided it was better to travel together than to travel alone.

MapR, with no allies and no path forward, succumbed to the inevitable when its funding was cut. I've seen arguments made online that the Big Data segment was nothing but hype, and that this is a warning about hype-powered investments in other tech segments. Yes, there was a lot of hype, but there is always a lot of hype.

Big data, analytics and AI workload deployments are increasing, but they are doing so without MapR, and that was MapR's problem. Following the hype, finding the fads, ignoring the posers and trying to pick a big winner from amongst many contenders has been the tech investment strategy for decades.

Y Combinator, the famed startup accelerator, has a rigorous vetting process and has invested in more than 700 startups. If you want to see the future, you start with Y Combinator. Out of those 700+ investments, how many breakthrough companies have hit it big? Three.

The majority of those investments failed, some became profitable businesses, but the ludicrously successful ones number in the low single digits. That's true for Y Combinator and it's true for the big VC firms.

Go from Y Combinator to the heavy hitters of VC finance and you find firms doing their best to identify as many unique companies as they can in a fashionable segment of tech. The hope is that one of them will be the massive success, the next market-dominating force in its category. Everything else they invest in will be seen as garbage, even if it's a nice little business. No VC wants a piece of a little business; if you can get to profitability without VC money you can be as little a business as you want.

VCs invest in companies they think have a shot at massive returns. When it becomes clear there is no shot, they stop investing and move on. MapR's failure is the realisation that against the public cloud providers, and the scaled-up Cloudera, MapR had no shot.

There is always a new season and a new fad to invest in. With diligence and a bit of luck one of those investments will hit it big, and we can then start complaining about how overvalued it is or how high its prices are.


The revenge of the original RISC processor.

Two categories of microprocessor have dominated computing over the past 18 years: performance processors and low-power processors. They are now being joined by a third category, minimal-cost processors. The emergence of minimal-cost processors, led by RISC-V and MIPS Open, will replenish the dwindling supply of system design engineers.

The dominance of the Intel architecture eliminated a generation or two of system design engineers. Years ago there was a cornucopia of processor architectures used in different computing segments; then Intel's critical mass made standardisation viable across a number of different markets. This move to x86 everywhere made economic sense for many companies, but the result was a homogenisation of design.

Where designs were once expensive and bespoke, processors and boards became a commodity. Intel used economies of scale to extract significant gross profit from those commodities, while its fabrication prowess ensured competitors could never execute fast enough to challenge it. When Intel stumbled a competitor rose; when Intel returned to superior execution the competitor fell.

As a host of processor competitors withered and died, there was neither space nor need for people working on things substantially different to x86. Very quietly, the PC and server wars ended. Across desktops, in appliances and in data centres, "Pax Intel" reigned.

For a long time AMD and IBM POWER were the only two other providers left standing among a pile of defunct designs, and their market share was minimal. It took a new class of device, the smartphone, for ARM to propagate across the world. Intel and AMD (and POWER) occupy the performance category, but low power was an opportunity for someone else.

ARM, today's low-power champion, is now under threat from a new category: minimal-cost processors. Two designs from the distant past have returned, and unhappy ARM licensees are interested in what they have to offer.

The first minimal-cost design is RISC-V, the fifth generation of the original Reduced Instruction Set Computer project at UC Berkeley. The second is MIPS Open, the successor to the Stanford University spin-out processor that powered Silicon Graphics systems and the Nintendo 64.

These two minimal-cost processors offer current ARM licensees a choice: hire their own engineers to create new designs under an open license, or keep taking licenses for immutable core designs from ARM. Increasingly, firms are looking at the cost of licensing from ARM and putting their own design teams together instead. System design jobs have entered a new era of expansion, with companies doing their own bespoke implementations again.

RISC-V has a simple base specification of about 50 instructions; in contrast, the POWER Instruction Set Architecture I used to keep in a desk drawer runs to three bound books and over 1200 pages. RISC-V's academic roots are plain to see: there is just enough to get going.
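To make that simplicity concrete, here is a toy sketch in Python that pulls apart the fields of a single RV32I instruction word. The base encoding is regular enough that a handful of shifts and masks covers a whole instruction format; the example word is the encoding of addi x5, x0, 42.

```python
# Toy decoder for one RISC-V RV32I I-type instruction word.
# Fields follow the base spec: opcode in bits 6..0, rd in 11..7,
# funct3 in 14..12, rs1 in 19..15, immediate in 31..20.
word = 0x02A00293  # addi x5, x0, 42

opcode = word & 0x7F           # 0x13 marks the OP-IMM group (addi et al.)
rd     = (word >> 7) & 0x1F    # destination register: x5
funct3 = (word >> 12) & 0x7    # 0 selects addi within OP-IMM
rs1    = (word >> 15) & 0x1F   # source register: x0, hardwired to zero
imm    = (word >> 20) & 0xFFF  # immediate: 42 (sign extension omitted here)

print(f"opcode={opcode:#x} rd=x{rd} funct3={funct3} rs1=x{rs1} imm={imm}")
# -> opcode=0x13 rd=x5 funct3=0 rs1=x0 imm=42
```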

If you want to own a trinket you can drop $60 on a HiFive1 board with a genuine RISC-V processor, but you can also simulate RISC-V in QEMU today and boot Linux on it. Simulation performance is acceptable enough to get work done, and QEMU supports MIPS CPU simulation as well.
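For a rough idea of what that workflow looks like, the snippet below launches QEMU's RISC-V system emulator from Python. It assumes qemu-system-riscv64 is installed; the kernel image path and boot arguments are placeholders you would swap for your own build artefacts.

```python
# A minimal sketch: booting a RISC-V Linux kernel on QEMU's "virt"
# board. "Image" is a placeholder path to a kernel you have built
# or downloaded, not something QEMU ships with.
import subprocess

subprocess.run([
    "qemu-system-riscv64",       # the 64-bit RISC-V system emulator
    "-machine", "virt",          # QEMU's generic RISC-V virtual board
    "-nographic",                # serial console on stdio, no display
    "-kernel", "Image",          # placeholder path to a RISC-V kernel
    "-append", "console=ttyS0",  # send kernel output to the serial port
])
```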

While RISC-V may run Linux, it does not yet have the capability to move into the performance category. David Patterson, one of the original co-designers of RISC (and RAID), co-authored an interesting paper spelling out an ambition for RISC-V to become the ubiquitous processing core wherever computing is done. A lofty goal, but as with Linux it will take billions of dollars in investment from a broad partner ecosystem to move towards performance and take on the established providers there.

The ARM server CPU companies have faded because moving from low power to high performance brings a new set of brutal challenges. One of those challenges is meeting Intel and AMD on ground they know well and outspending them when required. When it mattered, Linux had friends with deep pockets; ARM server processor providers do not.

Unlike RISC-V, MIPS descended from high performance into low power and minimal cost through a number of marketplace defeats at the hands of others. MIPS was considered so strategic in the 90s that it was one of the original target platforms for Windows NT, but it is now more commonly found in embedded controllers.

With poor leadership and a hostile market, MIPS has had a rough two decades, but it continues to be used in embedded applications and appliances. Facing a surge in RISC-V interest, team MIPS had no choice but to open up its intellectual property for fear of becoming the first casualty of RISC-V. It was a move showing an intelligence that previous MIPS leadership lacked for years, so there might be room for MIPS Open in the minimal-cost processor segment yet.

The question that matters is whether minimal-cost processors can jump categories and take on the performance CPU providers. Yes, it is possible. But they're going to need a lot of friends with a lot of money, and those friends will really have to want them to succeed.

Same as it was back in the 80s, when the original RISC processor designs came out of the Berkeley CS labs and went on to dominate the UNIX business for years.


Quantum uncertainty.

The promise of quantum computing is its ability to derive answers to problems that are currently computationally prohibitive. Of the many open issues surrounding the delivery of mass-market quantum computers, two stand out: what exactly they will be useful for, and whether quantum computers can be scaled up enough to solve hard problems. Neither question has a clear answer today.

By modelling the natural world in a processor, as opposed to performing digital operations as current processors do, the assumption is that we will be able to quickly simulate the interaction of particles in the real world. Our current computing model, built as it is on zeros and ones, is inadequate for these use cases because in the natural world things are not binary. Traditional computers are built on the bit, which has a state of 0 or 1 and never anything else. The qubit, the building block of quantum computing, can occupy intermediate states. Imagine it operating anywhere between 0 and 1, delivering an output of 0 or 1 only when you take a measurement of its current state.
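As a toy illustration (classical code standing in for a qubit, not real quantum hardware), a qubit can be modelled as a two-element vector of amplitudes, and a measurement as a weighted coin flip over those amplitudes:

```python
# A toy model of a single qubit: two amplitudes, one for |0> and one
# for |1>. The qubit can sit anywhere between the two states, but any
# measurement collapses it to a definite 0 or 1.
import numpy as np

state = np.array([1, 1]) / np.sqrt(2)  # an equal superposition of 0 and 1

probabilities = np.abs(state) ** 2     # squared amplitudes give the odds
outcome = np.random.choice([0, 1], p=probabilities)

print(outcome)  # 0 or 1, each with probability 0.5; never anything between
```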

To give an inaccurate accounting of quantum computing, one I'll tell you right now is incorrect but requires no understanding of quantum mechanics or linear algebra: imagine that when simulating something using bits, you sequentially cycle through every outcome until you deliver an answer. For some workloads this can be done quickly; for others, usually involving an examination of the building blocks of reality, the computational time required can be measured in thousands of years. With qubits you can check multiple outcomes in parallel, because what you're using for your computation more accurately reflects the phenomena you are looking to examine. When you take a measurement you have an answer, and the answer is not derived from a binary simulation of reality painfully stepped through one option at a time, but from the effect of computation on reality as it exists.
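Here is a small sketch of that parallelism, again in purely classical code: putting n simulated qubits into an equal superposition means one state vector carries every possible input at once. Real quantum algorithms then shape those amplitudes with interference so that the measured outcome is likely to be the answer; this sketch stops at the uniform superposition.

```python
# Classical simulation of n qubits in equal superposition: a Hadamard
# gate on each qubit spreads the register across all 2**n basis states,
# so the single state vector "holds" every candidate outcome at once.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

n = 3
register = np.zeros(2 ** n)
register[0] = 1.0                # the register starts in |000>

gate = H
for _ in range(n - 1):           # build H (x) H (x) H for the whole register
    gate = np.kron(gate, H)
register = gate @ register       # every amplitude is now 1/sqrt(8)

probabilities = np.abs(register) ** 2
print(np.random.choice(2 ** n, p=probabilities))  # one of 0..7, uniformly
```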

This isn't to say quantum computing will solve all problems; it is not administrator rights to the knowledge of the universe, and for many problems it may be no faster than traditional computing. Quantum computing is expected to have applications in physics and in mathematical factorisation (for example, cryptography and the breaking of cryptography), but there is still a realm of hard problems expected to remain well beyond its capability.

To date we are unsure what quantum computers will be useful for, as the hardware is experimental, small scale and provides results of questionable accuracy. If chip designers can crack the tough problems around the development of quantum processors and qubits, the end goal will be discrete quantum processing unit (QPU) cards, delivered as accelerators the way graphics processors are today. For now, however, quantum computers are big, their qubits require isolation so they do not negatively interact with one another, and they need cryogenic cooling to remain stable.

Right now Intel, IBM and Google have fabricated double-digit qubit chips, but Intel admits these are probably not good enough for operation at scale because the qubits have a high error rate. The fact that the hardware returns too many incorrect answers to be useful for computation has not slowed the search for new quantum-accelerated algorithms. Lacking production-grade hardware, software developers have turned to simulating qubits on traditional computers. Microsoft has released a preview of its Q# programming language, which comes packaged with a quantum processor simulator, and there are extensions for Java and Python that do the same thing.
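To see why the error rate is such a problem, here is a hedged toy example; the error figure is invented purely for illustration, not a number from any vendor.

```python
# Toy readout-noise model: a qubit prepared in |0> should always
# measure 0, but each readout flips with some error probability.
# The 5% figure below is made up for illustration only.
import numpy as np

error_rate = 0.05
shots = 1000

flipped = np.random.random(shots) < error_rate  # which readouts went wrong
print(f"{flipped.sum()} of {shots} readouts returned the wrong answer")
```

Stack a few thousand gate operations on top of per-readout noise like this and the odds of a correct end-to-end answer collapse, which is why error correction dominates so much of the hardware research.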

As qubits in the real world may not perform as expected, how accurately software simulations running on traditional computers reflect future hardware is also a question yet to be answered. There may be a discovery or two still to come that is not reflected in the software, and when the hardware and software are finally delivered, the systems may simply fail to live up to their promises.

The quantum computing breakthrough has been five years away since the first time you heard the phrase "quantum computing", and its success is still not inevitable. While the technology industry, like fashion, has its hype, trends and seasons, it would be unwise to be cynical about a new technology in its formative state. That said, controlling expectations would be prudent until you can rent millions of qubits from your favourite cloud computing provider or add a QPU to your desktop.

Just be sure to keep a can of liquid nitrogen close to hand if you buy your own QPU.