In 1995, astronomers in Chile discovered the coldest naturally occurring spot in the universe – the Boomerang Nebula, measured at just shy of -458°F. That record stood for a while, until a tech startup housed in a warehouse outside Berkeley, California used cryogenic refrigeration to get a degree colder still, in order to run its newly built Quantum computer.[i]
Quantum computers promise computational power that seemed unfathomable just a few years ago, and it is now being made accessible in the Cloud to anyone who wants to use it.
Rewind to the dawn of networking advances in the 1980s. Businesses ran an abundance of independent, self-contained clusters of technology, often referred to as ‘islands of automation’. ARPANET (the Advanced Research Projects Agency Network), developed within the US Department of Defense, drove the subsequent maturation of the internet protocols that connected these islands. That advance enabled enormous progress in information sharing and brings us to the internet of today – and the accessibility of high-speed Quantum computing in the Cloud.
The history of computing is crucial to the modern culture of technological and social advancement in our lives, as it makes access to information easier than ever before. Thanks to Moore’s Law, edge devices and touch screens in homes, offices, cars and airplanes have become smaller, faster, more affordable and hence prolific. However, the need to rapidly access and compute accurate information across an industry or large organization continues to be a major challenge, one that transcends technology.
Maturity in social collaboration has exacerbated this challenge by creating remarkable volumes of data. Within corporations, this is often compounded by the proliferation of tools and technologies used effectively by small agile teams, which at scale morph into piecemeal adoption of disconnected platforms. Depending on which part of the world you live in, or the industry you work in, varying degrees of collaboration tooling and information silos (e.g. Slack, Teams, Trello, Jira) make access to accurate information more complex than ever before.
This is not dissimilar to the early networking challenges mentioned above. The key difference is that today we’re extremely well connected by networks, and hopelessly disconnected on information, thus creating ‘islands of information’. The global pandemic of 2020 has only made this challenge more pronounced. Addressing this gap is difficult, and resolution requires a combination of modern technology and a deep understanding of human social behavior.
The proliferation of phone cameras and microphones has led audio, video and graphic assets to dominate almost every search, and traditional search mechanisms are unable to offer sufficiently precise and accurate outcomes. In everyday use of the internet, this is acceptable to most consumers (for now). However, in private enterprise, where digital audio, image and video assets are the equivalent of traded currency, this form of search ranges from significantly inefficient to outright disastrous.
The intersection of large volumes of unstructured data and the silos it resides in is a monumental challenge, one that needs both high-speed computing and a well-connected network. These needs are seemingly solved by readily available Quantum computing, high-speed networks and Cloud solutions. The element that has always been elusive in this equation is adapting such high technology to human behavior.
Enter Cognitive Search. Built on natural human behavior and modern Artificial Intelligence methods, it gives us access to the most relevant information in the most effective way. While general-purpose search engines will catch up and adapt to new ways of searching for information, businesses need that challenge solved with greater urgency. Organizations that crack this code will enable their teams to be far more effective in responding to Client and Customer demands. Be it customer acquisition, complex workflows, digital archives or talent search, Cognitive Search will accelerate a culture of innovation and let organizations respond to rapidly changing business needs with newfound agility.
Today almost 100% of all searches are driven by a person with access to a little search box or a voice-enabled edge device. Search engines continuously scour multiple sources of information for updated results, making today’s search experience remarkably improved compared to even a few years ago. This method has been a staple for most of us and has matured significantly over time.
Even so, 20 search results per page, with the overall results list going many pages deep, is inefficient in the context of our ever-increasing volumes of data. The staggering estimate of six billion searches a day on Google alone is a key indicator of how well we humans have adapted to ‘information on demand’. This sheer volume will continue to drive the next behavioral revolution - Conversational AI. The Statista.com projection of eight billion Voice-enabled assistants in use globally by 2023 is a great indicator of the role Voice will play in the near future.[ii]
Combine conversational AI enabled devices with visual technologies such as advanced image detection, face recognition, color scheme detection, handwritten text detection, video indexing and image categorization, and you have a future state where we have effectively abstracted the concept of ‘islands of information’, and enabled access to information silos using human cognitive functions.
For the layperson, the convergence of modern technologies will only accelerate this change: soon you will speak to your digital voice assistant in your own language, dialect, accent or tone, with complex instructions. For example, instead of needing to know the title of the show, you can say “Hey Siri, find me that HBO show with dragons and stream it to my living room TV”. Or, for business purposes, “Hey Alexa, find me Lauren’s presentation on project management in Cape Town.” Some of this capability is available today, but within limited data ecosystems. Growth to broader ecosystems will only happen by leveraging nanotechnologies for data storage (but that’s for another day).
Simply put, massive volumes of information combined with converging advanced computing techniques such as Quantum Computing in the Cloud, Advanced Content Intelligence and Cognitive Search will allow us to home in on contextually relevant visual, sound and text assets with great precision. Moreover, search inputs that are second nature to humans - voice, color identification, object similarity and brand recognition - will become commonplace. The impact of this convergence will be felt across a broad spectrum, from e-commerce, robotics and marketing automation to corporate environments and small businesses.
Research Case
Late last year, a small innovation team at Ogilvy worked with a Silicon Valley startup and Google to see if the technological advances above had managed to catch up to the hype in this area. The team was posed with a unique challenge: a new Client had handed off a large volume of digital assets to the account team and needed them attributed to the relevant business verticals.
The traditional approach would assume that these assets carry metadata tags: a large volume of data would be uploaded to an infrastructure able to instantly search and accurately sort the basic information associated with each asset. The gaps would then be filled with a tremendous amount of human effort over a number of weeks. Despite all this effort, there would still be no way to guarantee accuracy, or that assets were consistently tagged.
Utilizing Artificial Content Intelligence allowed the team to use readily available Cloud-based infrastructure to host the assets and on-demand high-speed computing to categorize them. Algorithms automatically scanned the usual metadata but went above and beyond, deep-scanning the content of each asset and creating an entirely new and detailed index of information. By further ‘teaching’ the algorithms what to look for, the machine started to understand details like brands, logos and primary color schemes in the images - not just the metadata tags typically associated with them. It could also do this at scale and at tremendous speed.
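The article does not detail the team’s actual tooling. Purely as an illustration, a minimal sketch of this kind of deep-scan indexing, assuming Google Cloud Vision’s Python client, a local folder of image assets and an illustrative JSON index structure, might look like the following; the folder name, index fields and output file are hypothetical.

```python
# Minimal deep-scan indexing sketch (illustrative, not the team's actual pipeline).
# Assumes the google-cloud-vision package is installed and credentials are configured.
import json
from pathlib import Path

from google.cloud import vision

client = vision.ImageAnnotatorClient()

def index_asset(path: Path) -> dict:
    """Extract labels, brand logos and dominant colors for one image asset."""
    image = vision.Image(content=path.read_bytes())

    labels = client.label_detection(image=image).label_annotations
    logos = client.logo_detection(image=image).logo_annotations
    props = client.image_properties(image=image).image_properties_annotation

    return {
        "asset": path.name,
        "labels": [label.description for label in labels],
        "logos": [logo.description for logo in logos],
        "dominant_colors": [
            # RGB triple plus the fraction of image pixels it covers
            [[color.color.red, color.color.green, color.color.blue], color.pixel_fraction]
            for color in props.dominant_colors.colors
        ],
    }

if __name__ == "__main__":
    index = [index_asset(p) for p in sorted(Path("assets").glob("*.jpg"))]
    Path("content_index.json").write_text(json.dumps(index, indent=2))
```

Run against a folder of images, a pass like this produces a content-level index that goes well beyond whatever metadata tags the files arrived with - the step the account team would otherwise have performed by hand.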
The Outcome
The Cognitive Search methods of access considered for the research ranged from simple speech and audio recognition to search by color, image likeness, face recognition and brand logo recognition. What this meant was that the Content Intelligence index was a true replica of the content, so cognitive functions like voice or visual object matching could be applied to it. For example, a human might have watched a clip of a car race and tagged it with the words ‘race car’ and the location of the race, ‘Nürburg, Germany’. The algorithm, on the other hand, tagged the clip with the color and make of the car, the brand logos along the race track, elemental factors such as the weather, and so on. Training an algorithm to do this takes effort, but once done it takes on a life of its own.
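The matching logic itself is not described in the article. As one hedged illustration, a ‘search by color’ query over the hypothetical content_index.json built in the previous sketch could rank assets by how closely their dominant colors sit to a requested color; the distance metric and coverage weighting below are illustrative choices, not the team’s method.

```python
# Illustrative "search by color" over the hypothetical content index.
import json
import math
from pathlib import Path

def color_distance(a, b):
    """Euclidean distance between two RGB triples (0-255 per channel)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search_by_color(index, query_rgb, top_k=5):
    """Rank assets by their best-matching dominant color, favoring colors that cover more of the image."""
    scored = []
    for entry in index:
        best = min(
            (color_distance(rgb, query_rgb) / (coverage + 1e-6)
             for rgb, coverage in entry["dominant_colors"]),
            default=float("inf"),
        )
        scored.append((best, entry["asset"]))
    return [asset for _, asset in sorted(scored)[:top_k]]

if __name__ == "__main__":
    index = json.loads(Path("content_index.json").read_text())
    # e.g. "find me the red race-car footage" -> query for a strong red
    print(search_by_color(index, query_rgb=(200, 30, 30)))
```

The same pattern generalizes: a voice query is transcribed to text and matched against the label and logo fields, while image likeness swaps the color distance for a similarity measure over image features.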
Limitations
The team found that some limitations do exist. Interfacing with these comprehensive data sets means the simple search box is no longer a viable input; cognitive input methods need to be designed and developed with UI/UX teams. Further, the technology exists but is far from being democratized. Achieving success requires investment in advanced skills - data scientists and AI and machine learning engineers - that can be hard to find.
Conclusion
The convergence of Cognitive Search and Artificial Content Intelligence opens an entirely different technological dimension, one that will benefit society in general and enable efficiencies and better outcomes across all industries.
Being digital is about people, process and technology. In today’s transformational landscape, the implications of advanced technology are more significant than ever before, as long as we take into consideration how this rising tide lifts all boats.
[i] Adapted from ‘The Future is Faster than You Think’ by Peter H. Diamandis and Steven Kotler
[ii] https://www.statista.com/statistics/973815/worldwide-digital-voice-assistant-in-use/