When I graduated as a microelectronics designer in the early eighties, I never thought we would see Moore’s Law reach its limits in my lifetime. The prediction made by Intel’s co-founder Gordon Moore that the number of transistors on a microchip would double about every two years has held remarkably well until now. Computers have shrunk dramatically, becoming the smart devices that we carry and interact with every day.
In 2005, the futurologist Ray Kurzweil published “The Singularity Is Near” – two years before Apple launched its iPhone – in which he envisioned a world where humans and computers would essentially fuse, delivering new capabilities. Many thought that this notion was science fiction. Kurzweil pointed out that as technology accelerates at an exponential rate, progress would eventually become almost instantaneous – a singularity. He also predicted advances in areas such as genomics, nanotechnology, automation and robotics. The convergence and interoperability of sciences would become the norm, with electronic and biological systems able to interface and exchange information automatically.
Today, we are approaching the theoretical limit of Moore’s law. Transistors cannot be shrunk any further on a silicon wafer because quantum effects at the atomic scale introduce instability that causes them to malfunction.
However, numerous approaches to improving performance – quantum computing (making direct use of quantum-mechanical phenomena such as superposition and entanglement), neuromorphic chips (microprocessors configured more like brains) and 3D stacking (three-dimensional integrated circuits) – will continue to accelerate progress and open up new applications across many sectors for decades to come.
The convergence of technology and human intelligence is real and is driving innovation in every sector. Through genomics, our complex biological structures are expressed as code, and we have already started to edit that genetic code to prevent crop disease and improve crop yields. Advances in techniques such as machine learning – computers learning without being explicitly programmed – are pushing the boundaries of what is possible, opening up a new era of innovation. Today, more and more robots are undertaking routine, algorithmic jobs, freeing up time for humans to focus on areas of innovation that deliver greater social and economic benefit. Machine learning is ubiquitous: consider autonomous vehicles, the recommendations offered by the likes of Amazon and Netflix, the analysis and interpretation of social media trends, and fraud detection through complex pattern recognition. These examples demonstrate how artificial intelligence algorithms are being employed in our lives every day.
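The core idea of machine learning – a program inferring behaviour from examples rather than being given explicit rules – can be illustrated with a minimal sketch. The nearest-neighbour classifier below is one of the simplest learning techniques; the transaction data and labels are invented purely for illustration and do not come from any real fraud-detection system.

```python
# A minimal nearest-neighbour classifier: the program is never told
# the rule separating the two classes; it infers a label for a new
# point from labelled examples alone. All data here is hypothetical.

def classify(point, examples):
    """Return the label of the labelled example closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: sq_dist(point, ex[0]))
    return label

# Hypothetical transactions: (amount, hour of day) -> label
examples = [
    ((12.0, 14), "ok"),
    ((8.5, 10), "ok"),
    ((950.0, 3), "fraud"),
    ((1200.0, 2), "fraud"),
]

print(classify((15.0, 13), examples))    # small daytime purchase -> "ok"
print(classify((1000.0, 4), examples))   # large night-time purchase -> "fraud"
```

Nothing in the code encodes a rule like “large night-time purchases are suspicious”; that pattern emerges entirely from the labelled examples, which is the essence of learning without explicit programming.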
As digitization of the physical world continues to explode, human behaviour becomes more sensitised to machine cognition, attuned to new ways of doing things, and ultimately comes to trust and value the machine’s decisions. Technological interventions are changing us physically and mentally. Reliance on technology is growing and remoulding society. For example, the power of AI and big data is being harnessed to generate stunningly accurate behavioural predictions that are shaping – and therefore manipulating – human behaviour. Unprecedented access to instantaneous information at our fingertips has changed our willingness to remember facts, since we know everything is just one click away.
The trump cards that give humans their advantage are empathy and creativity – the capacity for generating an original thought – at least for the moment. Subjective as well as objective reasoning is where human capability stands out and technology still lags.
Recognising this ever-shifting paradigm, organisations will have to address the difficult challenge of striking the right balance between human and machine decision making, and of reflecting that balance in the organisation’s design. This human-machine interplay continues to fuel further innovations that challenge the conventional wisdom of our existing organisational systems and processes, and often renders them obsolete.
In my previous Brite Innovation Review article, ‘Get Ready for the Digital Transformation’ (December 2015), I highlighted how organisations could dynamically reconfigure their business logic to create a business advantage by combining technology drivers (real-time big data, cloud computing, the Internet of Things).
It is apparent that although we are approaching the limit of Moore’s law from a hardware perspective, the connected nature of human and machine will take us to a new level of capability. Future organisations will become ‘hybrids’ of interlaced networks of knowledge, able to reconfigure rapidly and adapt their ecosystems to create value.
On this new trajectory, organisations will need to invest wisely. A set of new guiding principles will thus be required to configure organisational structures that optimise the value of human-machine networks, and to inform strategic options on where to innovate.
Developing a single vision for human-machine innovation is the initial step. Three key cyclical stages will need to be applied simultaneously to both the human and machine domains: acquisition of knowledge (where data is analysed), diffusion (where learning happens), and action (where value can be generated and monitored within an organisation’s ecosystem).
Ultimately, value creation is the aim that joins the hybrid pieces of human-machine innovation, providing new opportunities to support prosperity and drive growth.
Hybrid organisations require a fusion of interdisciplinary skills and capabilities. Data science becomes a ubiquitous skill: people must be able to work with data that is often unstructured and at very large scale, while at the same time exploring it for different needs and opportunities.