Future of Enterprise Search
A lot has been written about how GPT will revolutionize (or already has revolutionized) Search, and maybe even the world at large, as we see these advanced models being applied in schools and corporate life alike.
However, I work in the Enterprise Search space, which differs from Consumer Search on multiple levels (some key differences below).
- Usage frequency & interaction — Enterprise Search has a comparatively small audience (within a specific tenant), since it's bounded by a single enterprise/organization. Usage frequency and engagement are therefore orders of magnitude lower than in consumer search (e.g., Google), which in turn creates lengthier and less frequent feedback loops for enterprise search to learn from and adapt to.
- Number of webpages & content involved — Consumer Search has billions of webpages to sift through compared to search scoped to an enterprise, but therein lies an advantage as well. Almost every insight/answer repeats itself more than once across the 12B+ pages of data on the internet that Google indexes. This repetition reinforces search relevance, making results more accurate. Compare this to an enterprise, where pages might number from 10 to 10,000, and a specific answer might appear only once — as a sentence, a paragraph or a phrase. Furthermore, that specific answer might have been requested by only a few users, versus the millions of users in a consumer search setting. It therefore becomes challenging to surface the most relevant insight when the model hasn't been reinforced through repetition.
GPT and other advanced generative models can help with identifying user search intent, translation, and even content creation, summarization (of conversations, emails, web pages etc.) and augmentation, boosting employee collaboration and productivity in the enterprise. However, to understand how Enterprise Search will drive a transformative shift in the workplace, let's start with where it all began.
EVOLUTION OF ENTERPRISE SEARCH
In the Beginning…
We had keyword search that ignored syntax (how words and phrases combine to create meaning and context) and semantics (the intent and contextual meaning behind a search query). Still, it was a good first attempt at scouring enterprise content comprising documents, files and their associated metadata.
Then came Personalized & Contextual search, wherein the context of a user's query is identified based on their search history, behavior, device, location and other attributes of the user or the cohort they belong to.
Semantic, Natural Language and Assistive Search have more or less transpired over the previous 5 years, with ML models leveraged to process and understand natural language queries, identify the sentiment of a user's query (based on a trained ML model), and analyze the structure, relationships and meaning of query terms to understand the user's intent. In conjunction, efforts have been underway to bring more "assistants" or "task automation" into the mix, helping users accomplish workflows automatically without much manual intervention.
2023 is already buzzing with conversational interfaces (ChatGPT-style) where the models are more cognitive — breaking down the search query into key entities, accurately identifying the user's intent, and then extracting relevant, personalized information from multiple, diverse data sets. We're already seeing search efforts aiming for pro-active recommendations, tightly coupled with relevant content summarization, augmentation and generation.
Five years down the line, we might even see neural-laced search, where the human brain merges with computers (case in point: Neuralink) and your thoughts are transmitted to a machine that produces ever-better answers, evolving as the questions in your mind become more thoughtful.
Major challenges today in Enterprise Search
To understand what the future holds for Enterprise Search, let’s first briefly identify the challenges that this ecosystem faces today in an organization —
- Search Completeness — In most organizations, the search index is only 40% to 60% complete, and this gap only widens with the introduction of new document types and sources — Figma, Power BI, wikis and other new connections. LLMs can perform large-scale content mining and help provide complete answers from a wide range of sources and entity types.
- Search Relevance — Users continue to express dissatisfaction (DSAT) and fail to find accurate answers, people and files in search. Enterprise Search can be made more relevant by providing direct, semantically powered answers in search.
- Assistive Search — Today users do the hard part: combing through multiple results to distill the information they need. GPT-based search will extract information and summarize results for users, saving them time to accomplish their workflows.
- Persona-based Search — Today, Enterprise Search is not intelligent enough to understand the various personas/roles of employees in an organization, resulting in high failure rates for people/persona-based searches. LLM- and GPT-based search can identify, process and distill relevant information from an employee's role, work profile and other specifics, making results highly personalized.
- Task completion in Search — Today employees have to search and then take actions manually to get things done. GPT-based search can directly accomplish tasks for users, such as finding the right people with certain expertise and starting a chat, booking a meeting, scheduling trainings, or summarizing and augmenting content.
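The completeness gap above can be made measurable. A minimal sketch of estimating what fraction of an organization's known content sources are actually connected to the search index (all source names and numbers here are hypothetical examples, not real connector identifiers):

```python
# Estimate search-index completeness: the share of known content
# sources that are actually connected to the enterprise index.
# Source names below are hypothetical examples.

def index_coverage(known_sources, connected_sources):
    """Return the fraction of known sources covered by the index."""
    known = set(known_sources)
    if not known:
        return 0.0
    return len(known & set(connected_sources)) / len(known)

known = ["sharepoint", "wiki", "figma", "powerbi", "slack"]
connected = ["sharepoint", "wiki"]

coverage = index_coverage(known, connected)
print(f"Index coverage: {coverage:.0%}")  # 2 of 5 sources -> 40%
```

A real implementation would pull the source inventory from an admin console rather than hard-coded lists, but the 40%–60% completeness figure cited above is exactly this kind of ratio.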
On the user side, in an enterprise search setting, employees have three main intents in their workday, for which we need to uplevel our search game —
- Find — The user is in the midst of a workflow and wants to find some insight/answer quickly so that they can use it in a document/file/presentation/email/conversation. The main job the user is trying to accomplish is completing the task at hand (a document write-up, an email composition, or a decision during an online conversation); however, they first need to "find" the specific answer in their workflow in order to complete that original task.
- Re-Find — Similar to the above, the user is trying to re-find an answer (or email or person or file/document) that they've accessed before, and needs to do so again — quickly, since the user knows the answer exists somewhere in the depths of their enterprise content (e.g., emails they know they've sent or received before).
- Research — Last but not least, at least 30–40% of the workday is spent searching for information around a project, business proposal, product pitch or other requirement. "Research" is a vague term, but it encompasses all the workflows users conduct over a substantial period — a day, a week, even months — for a project of some sort. These research-type use cases take a lot of time and energy: sifting through multiple sources of data, across a variety of types and formats, and distilling the information down to relevant bits you can use in a document or a presentation.
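The three intents above could be approximated with a cheap first-pass heuristic before any heavier model gets involved. A rough sketch (the cue words and the length threshold are illustrative assumptions, not a tuned production classifier):

```python
# Rough first-pass classification of a query into the three intents
# described above: find, re-find, research. Cue words and the length
# threshold are illustrative assumptions, not tuned values.

REFIND_CUES = {"again", "my", "sent", "last", "yesterday", "earlier"}
RESEARCH_CUES = {"overview", "compare", "analysis", "research", "options"}

def classify_intent(query: str) -> str:
    words = set(query.lower().split())
    if words & RESEARCH_CUES or len(words) > 8:
        return "research"
    if words & REFIND_CUES:
        return "re-find"
    return "find"

print(classify_intent("q3 revenue figure"))                      # find
print(classify_intent("the deck I sent last week"))              # re-find
print(classify_intent("competitive analysis of vendor options")) # research
```

In practice an LLM would replace the cue lists, but even a heuristic like this lets the experience branch early: quick answer card for "find", personal-content ranking for "re-find", a longer conversational session for "research".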
User Expectations are evolving
Would I even “search” in the future?
Building on today's user intents, employee behaviors and expectations are also rapidly reshaping themselves around the new tools and gadgets users are exposed to, and the attention capacity those experiences afford. Search expectations are evolving along four axes — what, how, when and where users ask their queries.
WHAT I ask — There's a definite mindset shift from "searching" to "accomplishing a task", since more than 80% of an employee's workday is about accomplishing tasks (writing an email, summarizing the minutes of a meeting, taking notes and following up with people, writing a business document, and so on). Hence, users are not looking for a mere answer to a question; they expect to complete a job/task in a frictionless manner.
On a similar note, the content and context that users are searching for may come through from a wide variety of sources & data types — audio, video, images, text etc., which will allow for the rendering of deeper insights to any kind of query.
HOW I ask — The way users ask questions or request assistance is also undergoing a shift from text-based to voice-enabled — similar to consumer search, where voice plays a significant role in contexts such as voice assistants in cars, at home, and especially on personal mobile devices.
As more and more employees use their mobile devices as a secondary tool for workday flows, interoperability between mobile and PC needs to be seamless for a frictionless search experience.
WHEN I ask — As mentioned before, employees are more likely to want to accomplish a task than to search for a specific answer; hence timely, pro-active recommendations and pre-completed tasks need to be tightly integrated into search results, without requiring the user to switch context and finish the workflow elsewhere.
Extending this expectation further, users might expect to get their needs forecasted ahead of time and have their workflows/tasks automatically accomplished based on a particular time of day, or day of the week/month, based on previous user behavior.
WHERE I ask — Users today expect and want their queries answered, or their task accomplished, on the canvas/platform/tool they're currently on — e.g., Slack, Teams, Gmail, Outlook, an internal wiki site — without having to switch context to another app/platform to get that specific insight.
Search is also expected to be conversational rather than uni-directional, as popularized by ChatGPT. Hence a universal chatbot across all platforms in an organization, one that parses the user query, understands context and provides the most relevant insight, might become the universally accepted standard.
Taking Enterprise Search to the next level
Now let’s see what the future of Enterprise Search holds —
#1 Search needs to break
Users have workflow memories, i.e., they continue to search the way they were taught or made to — sometimes for lack of better alternatives, and at other times because they're simply habituated to the friction. There are traditional workflows users adhere to — e.g., while deep in the process of creating a business proposal in PowerPoint, a user needs a relevant piece of insight/content, so they go to the web/corp-net or some other canvas to access that content, then switch back to insert it into the deck. There are ways to accomplish this workflow from within PowerPoint (the search bar at the top lets you search workplace/enterprise content), but this shortcut is known only to power users who are aware the functionality exists.
We expect users (and rightfully so) to go search and find a piece of insight, to stick to their traditional workflows, and we merely try to shave friction off that existing process. That is, we expect users to come to search. But why can't search come to them? Why don't we break search as it's historically been done, and rebuild it the right way from the ground up? LLMs and advanced AI might just inspire us to do that.
#2 Understanding User Intent
Traditionally, the user rephrases queries until they find what they're looking for, especially for research-type jobs. Understanding user intent has been a slow and frustrating journey (and is still evolving), so these advanced AI models are critical for understanding the semantics of a search query. LLMs are great with unstructured data: breaking a query down to extract the user's intent, then compiling and summarizing insights based on the best matches across a wide variety of data sources.
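The breakdown step can be sketched in a few lines, with the model injected as a plain callable so the example stays runnable without any vendor SDK. The prompt wording, JSON field names, and the stubbed response are all assumptions for illustration:

```python
import json

# Sketch of LLM-based intent extraction. The model is passed in as a
# plain callable so this stays runnable; in practice it would wrap a
# real LLM API call. Prompt shape and JSON fields are assumptions.

PROMPT = (
    'Break the search query below into its key entities and the\n'
    'user\'s intent. Reply with JSON: {"intent": ..., "entities": [...]}\n'
    'Query: {query}'
)

def extract_intent(query, llm):
    raw = llm(PROMPT.replace("{query}", query))
    parsed = json.loads(raw)
    return parsed["intent"], parsed["entities"]

# Stub standing in for a real LLM call:
def fake_llm(prompt):
    return '{"intent": "find-person", "entities": ["Priya", "design review"]}'

intent, entities = extract_intent("who ran the design review with Priya", fake_llm)
print(intent, entities)
```

Once the intent and entities are structured like this, the retrieval layer can fan the entities out across data sources and let the model compile and summarize the best matches, as described above.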
Real-world search tasks are messy and complicated, and LLMs (and MUMs — multitask unified models) might help us navigate these messy human processes.
#3 Frontend UX vs. Under-the-Hood operations
As an employee, you go through multiple scenarios in a day: finding people (someone you've just spoken to, or came across in a meeting a week ago), finding a specific file or piece of content, or conducting full-blown research on a particular topic, among many other cases. In an ideal scenario, with advanced generative AI, users should control their own search experience in the front end (what job they want to accomplish at a given time), while all the specifics of serving the relevant information at the right time are handled behind the scenes (under the hood), so that the experience seems seamless (almost flawless) to the user.
The backend service should be able to figure out when the user needs to find something quickly ("search" on whichever canvas they're on) vs. when they're in full-research mode and want accurate, verifiable answers to a multitude of questions in the same context (through a "chatbot" interface).
#4 Platform Agnostic
The relevant, personalized insight should be presented to users wherever they are — on whichever canvas they're at in their workflow at a given time of day. This entails creating a decentralized, platform-agnostic search experience, i.e., a service that can be leveraged across endpoints, canvases and platforms to create coherency. Obviously, specific LOB (line of business) scenarios might require customized versions of enterprise search, but the components those customized versions operate on should be standardized across the enterprise, to create a coherent search experience for users in whichever canvas, workflow or context they're in at the moment.
#5 Creative ways to solve Hallucination rates
One of the weaknesses of these advanced AI systems is that such bots often seem to "sociopathically" and pointlessly embed plausible-sounding falsehoods within their generated content. This is called AI hallucination. Relatedly, subtle changes to images, text or audio can fool these systems into perceiving things that aren't there.
“A hallucination occurs in AI when the AI model generates output that deviates from what would be considered normal or expected based on the training data it has seen.” — Greg Kostello, CTO and Co-Founder of AI-based healthcare company Huma.AI
This can create an unwholesome, even troubling experience when you quote an AI-generated insight in an official document or embed it in your research paper, only to have it refuted later for standing on invalid grounds — or when these advanced AI recommendations prove untrustworthy during a search workflow.
However, there are creative ways to solve for this problem as our AI systems take time to learn and absorb contextual information. Bing Chat quietly released three distinct personalities, i.e., tones — creative, balanced, precise — where the preciseness of the answer/insight is under the user's control. This is just one way to bring AI hallucination rates down, and as we see more real-world applications of these generative AI models, suffice it to say that AI hallucination might just become a thing of the past.
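Under the hood, a tone selector like Bing's maps naturally onto a model's sampling temperature: lower temperature means less random sampling and fewer invented details. A minimal sketch — the numeric values are illustrative assumptions, not Bing's actual settings:

```python
# Map a user-selected tone to an LLM sampling temperature.
# Lower temperature -> less random sampling, fewer invented details.
# The numeric values are illustrative assumptions, not Bing's settings.

TONE_TEMPERATURE = {
    "creative": 0.9,
    "balanced": 0.5,
    "precise": 0.1,
}

def temperature_for_tone(tone: str) -> float:
    try:
        return TONE_TEMPERATURE[tone.lower()]
    except KeyError:
        raise ValueError(f"unknown tone: {tone!r}")

print(temperature_for_tone("precise"))  # 0.1
```

Putting the dial in the user's hands is the key design choice: for a quote headed into an official document, the user picks "precise" and trades flair for fidelity.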
#6 Admin controls are the next bet
Usually, Search Admins exist to monitor search health in an organization, identify search-failure patterns, tune relevance, and add bookmark answers, acronyms and new content sources (connectors) to the enterprise search ecosystem. With generative AI models, we'd be able to deploy AI to reduce this operational overhead, resulting in a better admin experience. Automated labelling of key content, adding new answers and content sources, tuning search relevance based on employee usage patterns, and other tasks can be conducted either through pro-active recommendations (assistive administration) or completely automated workflows (set it and forget it).
Thus, every organization, with its unique set of requirements, can get a personalized enterprise search experience without expending too much admin budget/time — improving admin efficiency and freeing admins to focus on other high-value tasks.
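As one concrete example of assistive administration: frequent queries that never receive a click are a natural signal for a recommended bookmark answer. A sketch over a hypothetical query log (the log format and thresholds are assumptions):

```python
from collections import Counter

# Assistive-administration sketch: surface frequent queries that never
# led to a click as candidates for admin-curated bookmark answers.
# The log format and the frequency threshold are hypothetical.

def bookmark_candidates(log, min_count=3):
    """log: list of (query, clicked) pairs from search telemetry."""
    totals, clicks = Counter(), Counter()
    for query, clicked in log:
        totals[query] += 1
        if clicked:
            clicks[query] += 1
    return [q for q, n in totals.items() if n >= min_count and clicks[q] == 0]

log = [
    ("vpn setup", False), ("vpn setup", False), ("vpn setup", False),
    ("payroll date", True), ("payroll date", False),
]
print(bookmark_candidates(log))  # ['vpn setup']
```

In the assistive mode, the admin reviews this list and attaches curated answers; in the fully automated mode, the system drafts the answer itself and the admin merely approves.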
Complementary to the hallucination problem, grounding answers/insights/results in sources of data (references) is necessary to provide verifiable facts to the user. Hence, we still need full-fledged websites/portals and full-length documents where the original data/content can live.
Line of Business (LOB) portals/wikis (such as HR, Tech Support, Benefits, Facilities, Legal etc.) would still need to exist as full-fledged content sources (sources of truth, to put it another way) for advanced generative AI models to base their insights and crisp summaries on.
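Grounding can be sketched as plain retrieval: score each source passage against the query and return the best matches with their references attached, so a generated answer can cite them. Term-overlap scoring below is a deliberately simple stand-in for the embedding-based retrieval a real system would use, and the passages and wiki paths are hypothetical:

```python
# Grounding sketch: retrieve source passages relevant to a query and
# keep their references, so a generated answer can cite verifiable
# sources. Term overlap stands in for embedding-based retrieval;
# passages and wiki paths are hypothetical.

def ground(query, passages, top_k=2):
    """passages: list of (text, source_ref) pairs."""
    q = set(query.lower().split())
    scored = []
    for text, source in passages:
        overlap = len(q & set(text.lower().split()))
        if overlap:
            scored.append((overlap, text, source))
    scored.sort(key=lambda t: -t[0])
    return [(text, source) for _, text, source in scored[:top_k]]

passages = [
    ("Parental leave is 16 weeks", "wiki/hr/leave"),
    ("Submit expenses within 30 days", "wiki/finance/expenses"),
]
for text, source in ground("how many weeks of parental leave", passages):
    print(f"{text}  [source: {source}]")
```

The point is the shape of the output: every insight travels with a reference back to the LOB portal it came from, which is exactly why those portals must remain first-class sources of truth.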
#7 Discoverability of use-case-specific bots
Many organizations already have use-case-specific (or LOB — line of business) chatbots, e.g., a customer support bot, tech support bot, HR bot, employee benefits bot etc. In addition to augmenting the capabilities of these chatbots (intent identification, information accuracy and relevance), advanced AI models might also help with their discoverability. Search will play a key role here, helping users discover and access existing capabilities (bots etc.) faster to dive into use-case-specific insights (e.g., personal payroll or immigration-related queries), improving employee productivity in the workplace.
Measuring Search Success
This will take another article altogether, but overall, Enterprise Search with GPT capabilities can help to —
- Improve Search Success (i.e., reduce search failures)
- Reduce Re-query rate (i.e. user trying to find the correct answer with different versions of the same query)
- Improve Time Savings (productivity)
- Increase NSAT (net user satisfaction)
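Re-query rate, for instance, can be computed directly from session logs: the share of queries followed, within the same session, by a reformulation of the same need. A sketch that crudely treats consecutive queries sharing a term as reformulations (a real system would use better similarity; the sample sessions are invented):

```python
# Re-query rate sketch: fraction of queries followed, in the same
# session, by a reformulation (crudely: the next query shares at
# least one term). Sample sessions are invented for illustration.

def requery_rate(sessions):
    """sessions: list of query lists, one list per user session."""
    queries = requeries = 0
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            queries += 1
            if set(prev.lower().split()) & set(nxt.lower().split()):
                requeries += 1
        queries += 1 if session else 0  # last query has no follow-up
    return requeries / queries if queries else 0.0

sessions = [
    ["q3 sales", "q3 sales emea", "travel policy"],
    ["parking pass"],
]
print(f"re-query rate: {requery_rate(sessions):.0%}")  # 1 of 4 -> 25%
```

Tracked over time, a falling re-query rate is one of the cleanest signals that direct, semantically grounded answers are actually landing.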
Overall, I'm excited about the possibility of workplace search completely overhauling itself to become a major employee-productivity engine (in conjunction with the announcement of Microsoft 365 Copilot — not a plug!). The mundane task of finding the right piece of content at the right time, in the right place, is definitely an interesting problem space to be in right now.