AI and the Future of Work: Supplementing, Not Substituting
We’ve long been fascinated by the big idea behind workflow automation – there’s something uniquely exciting about eliminating or augmenting a routine process to unlock human creativity and potential. And that’s key for us. From HubTran to LinkSquares to LogicGate, we’ve always been drawn to companies automating “the critically mundane”: the rote tasks people prefer not to do, so that they can focus on more creative concerns.
Historically, we’ve taken a highly targeted approach to workflow automation. Our core belief is simple: not everything should be automated. Workflow automation technology should rarely cut out the human component entirely. Instead, human input should be recognized and complemented. The goal, after all, isn’t to render humans obsolete but to extend their capabilities. In other words, the technology should serve as a co-pilot, not an auto-pilot.
Exploring machine learning and artificial intelligence (ML/AI) technology and its role in the enterprise was a logical extension for us. As the next generation of workflow automation, AI takes the simple abstraction of discrete, mundane processes further, powering novel capabilities that push humans past limits like time and resources.
Generative AI is a great example and has gotten a fair amount of buzz in the tech world as of late. Companies like Jasper.ai and technologies like ChatGPT are paving the way for AI-generated content to become a common practice, but they aren’t entirely stripping out the role humans play in the creative process.
In fact, AI itself tends to agree. Jessica Mathews, who writes the VC newsletter Term Sheet, recently put the very question of AI replacing humans in journalism to the test by having ChatGPT write an introductory column – and the output was as amusing as it was ironic: “Many newsrooms now use A.I. to write simple stories…But as a journalist, I also know that there’s something special about the human touch. We bring our own perspectives and experiences to the table, and that can give stories a depth and nuance that an algorithm just can’t match…I believe that AI will continue to play an increasingly important role in journalism, but it will never entirely replace human journalists. And I believe Term Sheet will always be better off with a human touch.”
I couldn’t agree more. AI, and the kinds of tools we see pop up every day (like human-augmented workflow automation), are enablers of inimitable human creativity and innovation, allowing people to develop and roll out new ideas at speeds that were once impossible. As someone who spends a lot of time thinking about the future of work, I find this premise incredibly exciting.
Of course, none of this is entirely new. The power and promise of intelligent algorithms have been part of the conversation for a long time, and many large organizations with flexible budgets have already incorporated AI. Still, its adoption isn’t widespread – yet. We believe there’s still a vast amount of whitespace and runway across and within certain use cases, where integrating AI technologies would not only disrupt processes but also substantially augment decision-making.
The Macro Backdrop
There are a few reasons why now is an especially interesting time for the commercialization and adoption of AI technology, particularly by the organizations and functions that have historically lacked the massive R&D budgets and resources to be early adopters in the space.
The most acute reason can be found in the first chapter of any economics textbook: the law of supply and demand. Though overshadowed by the daily news of tech layoffs, the fact is that the US is hovering near record-low unemployment. The number of open jobs (demand) far exceeds the number of unemployed people in the market (supply) – at a historic ratio of ~1.7x. The problem is especially urgent for highly skilled technical workers: in 2022, there were approximately five job openings for every software developer and about twenty for every data scientist.
Meanwhile, the demand for data continues to increase: 52% of companies accelerated their AI adoption plans during the pandemic, and 67% of companies now discuss data more than they did five years ago. Data engineering is one of the fastest-growing domains in technology, with over 88% growth in job openings last year. Data, and its strategic use, is no longer a “nice to have” – it’s crucial for any company that wants to understand the underlying trends driving its business and market. Unfortunately, with so few people capable of understanding that data, most of it today is not effectively leveraged.
Second, the conditions that enable (or hinder) the progress of AI have never been so favorable:
- Computing resources required to support large AI implementations are becoming less expensive and more easily scalable. The powerful computing capabilities of tech giants like Google are now more accessible to smaller companies.
- The role of Chief Data and Analytics Officer is on the rise, a trend signaling the increased priority data holds within enterprises. The number of organizations with CDOs increased from 12% in 2012 to 73.7% in 2022.
- The open-source movement has dramatically accelerated over the past few years. Thousands of pre-trained models are available online for free, meaningfully expanding the capacity of average data scientists and leaving more room for innovation in the space. In fact, ~48% of enterprises already leverage open-source technology for their ML/AI needs.
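To make that last point concrete, here’s a minimal sketch of what leveraging a free, pre-trained open-source model can look like in practice, using the Hugging Face transformers library (one of many options; the task and inputs are illustrative):

```python
# A minimal sketch: putting a free, pre-trained open-source model to work
# with zero training. Assumes `pip install transformers` plus a backend
# such as PyTorch; the task and inputs are illustrative.
from transformers import pipeline

# Downloads a pre-trained sentiment model from the public model hub.
# No labeled data, GPU cluster, or in-house research team required.
classifier = pipeline("sentiment-analysis")

# Score raw text immediately -- e.g., customer feedback a business team
# wants to triage without building a model from scratch.
reviews = [
    "The onboarding flow was painless and support answered in minutes.",
    "Invoices keep failing to sync and nobody can tell me why.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```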
Where are we focusing our efforts?
Given the conditions above, we’re very excited – specifically about the potential of verticalized, or “purpose-built,” AI. “Vertical” is most often used in the context of markets, but for our purposes it refers to solutions productizing AI for specific use cases, whether functional or industry-oriented, as opposed to horizontal applications.
Vertical AI solutions own and amplify an entire workflow, end to end, for a specific type of user or desired output, usually in a specific industry. Often pre-packaged into user-friendly workflows, these solutions let users extract pointed insights on the front end, leveraging data that is automatically configured and processed on the back end. This is meaningful for two reasons, both of which come down to lighter resourcing requirements.
- First, as out-of-the-box, end-to-end solutions, vertical applications often don’t require a specialized AI-specific tech stack to support their functionality. That’s a good thing, given the cost and integration lift the necessary infrastructure and tooling can demand of homegrown or horizontal solutions – everything from data collection to model training to monitoring and observability.
- Second, because they are tailored to a very specific set of utilities, the technical components of vertical AI solutions come largely pre-built, so operating them doesn’t require deep data science knowledge. Vertical solutions instead focus on configurability and flexibility; they benefit most from use by domain experts who know which boundaries to configure to garner insights – without actually having to touch much data.
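For illustration, here’s a purely hypothetical sketch of that “configuration over code” pattern. Every name, threshold, and connector below is invented, but it shows how a domain expert might set business-level boundaries while the vendor handles the data plumbing and modeling underneath:

```python
# Purely hypothetical sketch of "configuration over code" in a vertical AI
# product: a domain expert tunes business-level boundaries, while ingestion,
# feature engineering, and modeling stay behind the vendor's interface.
invoice_matcher_config = {
    "use_case": "invoice_matching",
    "data_sources": ["erp_export", "email_inbox"],  # pre-built connectors
    "match_threshold": 0.92,         # below this confidence, route to a human
    "escalate_over_amount": 10_000,  # dollar boundary a controller might set
}

def route_invoice(match_confidence: float, amount: float, config: dict) -> str:
    """Decide whether an AI-matched invoice can be auto-approved."""
    if match_confidence < config["match_threshold"]:
        return "human_review"  # co-pilot, not auto-pilot
    if amount > config["escalate_over_amount"]:
        return "human_review"
    return "auto_approve"

print(route_invoice(0.97, 2_500, invoice_matcher_config))   # auto_approve
print(route_invoice(0.97, 50_000, invoice_matcher_config))  # human_review
```

The point of the sketch is the division of labor: the person configuring it never touches raw data or model internals, only the boundaries that encode their domain judgment.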
An additional advantage of this specificity is faster time to value. Each function and use case is unique in the type of data it relies on and the structure in which that data can be leveraged, and horizontal solutions need to be retrained each time a new use case is added. Getting quick, insightful output from them therefore requires more time and effort on the inputs – and prioritizing that effort in a time-strapped environment is a challenge when the value it will generate is unknown. Because of this, many early players already have a retention issue.
It’s the classic chicken-or-egg conundrum, and it has ultimately resulted in churn for many. Vertical solutions, conversely, come with pre-defined guidelines tied to their specific use cases, often requiring far fewer iterations to generate value.
To sum it up: we recognize there are certainly organizations that would prefer to build a robust AI capability from the ground up – and, more importantly, can. The reality, however, is that for most enterprises this investment isn’t feasible without a clear ROI case, especially in the tight budgetary macro we’ve entered. That’s why we’re interested in innovations enabling this part of the market: solutions that have a low barrier to entry and can democratize AI across use cases, to truly become a deeply embedded and, eventually, commoditized way of driving work forward.
What ‘verticals’ are we excited about?
Saying “AI/ML” is like saying “automation” – it’s a hugely broad term that can be, will be, and in some cases already has been applied to virtually any function in any industry. But heading into this year, we know company leaders will be hyper-focused on investing in technologies with clear, needle-moving value. We think some of those solutions will enable the following:
Modernization of the CFO Suite. The finance function is still heavily bogged down by legacy processes and procedures. Layer that on top of an environment demanding resource efficiency, and CFOs will need to embed more powerful workflow solutions to do their jobs. I like this simple example an accounting professor wrote about on LinkedIn, highlighting the power AI can deliver in this suite.
Of course, that’s just one application of a specific AI technology, but it emphasizes why we think there’s so much room for disruption here. Finance demands are scaling faster than finance teams, and the pain seems especially acute in the book-closing process. At a time when financial efficiency and visibility are crucial, finance leaders are under increased pressure to deliver faster. In fact, a 2022 survey of finance leaders found that an overwhelming 93% are under pressure to close the books faster, despite being stuck with legacy solutions and resources. Yet this increased burden leaves no room for skimping on accuracy. If anything, the ongoing impact of immense overspending in tech, combined with a recessionary economic setting, is increasing the scrutiny on cash management, revenue visibility, and financial statements.
Considering all the above, we believe AI-powered solutions can be leveraged to improve everything from data collection to invoice management to risk-adjusted reconciliation.
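As one hedged illustration of that last point, here’s a minimal sketch of flagging anomalous ledger entries ahead of reconciliation, using scikit-learn’s IsolationForest. The features and data are simulated for the example; this is not how any particular vendor works:

```python
# A minimal sketch: flagging anomalous transactions ahead of reconciliation.
# Features and data are simulated; a real solution would ingest ERP/GL data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated ledger entries: [amount, days_to_settle, vendor_frequency]
normal = rng.normal(loc=[500, 30, 0.8], scale=[150, 5, 0.1], size=(1000, 3))
odd = np.array([[25_000, 2, 0.01],   # unusually large, fast-settling, rare vendor
                [480, 180, 0.9]])    # typical amount but months overdue
entries = np.vstack([normal, odd])

# An isolation forest learns what "routine" entries look like and scores outliers.
detector = IsolationForest(contamination=0.005, random_state=0).fit(entries)
flags = detector.predict(entries)  # -1 = anomaly, 1 = routine

print(f"Flagged {np.sum(flags == -1)} of {len(entries)} entries for human review")
```

Note the output: the model doesn’t close the books, it narrows a human’s attention to the handful of entries worth a second look – the co-pilot pattern again.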
Acceleration of Development & Design. The demand for front- and back-end developers has grown exponentially and will continue to for the foreseeable future. We believe AI-centric solutions might be able to fulfill some of that demand, specifically in these functions:
- Prototyping: Today, if a product or design employee at an organization has an idea, they can only go so far in mocking it up before they need a developer to actually build an MVP. However, thanks to AI app generators like those Glide and Apsy.io are building, now anyone can build a working version of their idea from plain-language descriptions. This is especially powerful because it completely alters the skills needed to take part in the development lifecycle. Much as the low-code / no-code movement did for traditional coding projects, this type of tooling already intersects with AI applications and will continue to, and we think prototyping is a big use case for it.
- Model development and integration: We’re particularly interested in the model marketplace or “model-as-a-service” approaches that streamline the development of various learning models. Solutions like AI Squared, Gravity AI, and Modelplace.ai offer pre-packaged models with a specific utility for a specific use case, accelerating the use of AI in the overall software development process. Note that while these companies themselves may be deemed “horizontal,” they still facilitate the vertical adoption of AI, which is why we’re so interested in them.
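To show the general shape of that integration, here’s a hypothetical sketch of consuming a pre-packaged model over HTTP. The endpoint, payload, and response schema are all invented; none of the companies named above necessarily expose this exact API:

```python
# Hypothetical sketch: consuming a pre-packaged "model-as-a-service" model
# over HTTP from inside an ordinary application. The URL, payload, and
# response schema are invented; no named vendor's actual API is shown.
import requests

MARKETPLACE_URL = "https://models.example.com/v1/contract-clause-classifier"

def classify_clause(text: str, api_key: str) -> dict:
    """Send text to a hosted, pre-trained model and return its prediction."""
    response = requests.post(
        MARKETPLACE_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"inputs": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g., {"label": "indemnification", "score": 0.93}

# The calling application never trains, hosts, or monitors the model itself --
# the marketplace handles that, which is exactly the resourcing win described above.
```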
We call out the areas above because that’s where we believe the most radical opportunities for improvement lie, but that’s not to say that categories that have already adopted AI with open arms won’t continue to see innovation – namely, code development and testing. From GitHub’s Copilot to Meta’s TransCoder to Intel’s ControlFlag, tech giants have been making major investments, and waves, here for a long time. But if we’ve learned anything from history, it’s that there will inevitably be a continued rush of startups attempting to improve on the capabilities incumbents brought to market. We’re excited to see who those startups are (if you know any, let me know).
Considerations for an Investment
While the above are all categories and use cases we’d like to invest behind, I keep reminding myself that we’re still talking about a space that, relative to its potential, is very much in the early innings.
There are still quite a few open questions to be answered, and we must stay flexible about different, and potentially new, business models. To help answer some of them, I, like many investors, think about where a solution in any of the above sectors can create a viable moat. For verticalized AI solutions, there are really only three options:
- The UI: This refers to everything that contributes to the user-facing experience – the “look and feel,” of course, but also the configurability, flexibility, and ease of use of the solution. What workflows and analytics are pre-built? How granular are they? How easy are they to change or build on top of? All of these factors can make one solution stand out over another in the near term, but I question how sustainable that is, since replicating a UI is likely not that difficult in the long term.
- The Model: This refers to the proprietary nature and strength of the intelligent algorithm driving the outputs. “Strength” can be measured on a number of variables but most likely has to do with both accuracy and relevance. The fact is that while models can be highly relevant, they’re not always accurate. Even a technology as neat and powerful as ChatGPT has proven that it can spit out a completely incorrect answer disguised as something highly relevant and seemingly accurate.
Accuracy comes from both the bones and the thinking behind the model itself and, even more so, from the data that trains it. On the former, open-source software and marketplaces stand to commoditize models. Certainly, there will be situations where stronger data scientists can build something better; still, I believe much of this will be incremental, and eventually model capability will start to converge across users.
If this happens, it would be a big win to have backed one of those solutions facilitating mass adoption (e.g., a marketplace), but identifying something truly proprietary will be challenging. So, while it sounds contrarian, AI models may not always be the moat for AI companies. That leaves the data.
- The Data: Probably self-explanatory, but this refers to the inputs driving the level of intelligence and rate of learning of an AI model. It’s important to note that volume alone is not what makes data more or less powerful; its quality and richness matter just as much.
Lots of companies have lots of data, but most can’t, or don’t know how to, leverage it appropriately. Solutions that maximize this utility while optimizing quality will likely be meaningfully better than those that don’t, because their models will be better trained. We think factors such as breadth and depth of integration and data ingestion capabilities, as well as diversity of sources – through network effects, partnerships, or even synthetically generated data – can help build a competitive advantage here.
With that, I’ll make one ask: while we as investors love to be right, we rely on knowing when we’re wrong. If you’re thinking about the promise of enterprise AI in a different way, or innovating in a space we’re not looking at, let us know! Drop me a note at [email protected], and let’s geek out on how AI/ML will continue to change the way people work.