Two operational concepts, the “eager intern” and the “autonomous agent,” can help jumpstart your AI strategy.
By Bryan Kirschner, Vice President, Strategy at DataStax
Bill Gates has seen (or, for that matter, caused) some profound advances in technology, so I don’t take a contrarian position lightly, but I think the way he describes his epiphany about the importance of AI is only half right.
After Gates was “awed” by OpenAI’s GPT model acing the AP Bio exam, the model was asked a non-technical question: “What do you say to a father with a sick child?” Gates describes the results this way: “It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.”
I don’t dispute that. As a user of ChatGPT to both get work done faster and kick the tires on what it can do, I’ve been impressed (it replied to a prompt to “tell me about Aristotle in the style of Roy Kent,” the expletive-prone “Ted Lasso” character, with uncanny flair).
But as we all shape business strategy around the implications of generative AI, we also need to look 180 degrees away from concepts like “stunning” or “uncanny” toward “purpose-built,” “predictable,” and “productive.”
That’s because we’d absolutely expect a model trained on (say) 10,000 sympathy cards or 1,000 eulogies to come across as sensitive, consoling, and well-spoken, hitting the right tone better than most of us could do on the fly. It should be entirely unsurprising, at least for people of the cultural or religious background for whom the original content was produced.
For all the risks of hallucinations or bad behavior from models trained on the open internet, generative AI strategy in all our organizations is about unlocking the potential of well-intentioned people to create well-intentioned AIs tailored to their specific context. Fine-tuning models that run “on top” of foundation models requires less data, costs less, and can be completed quickly.
Marc Andreessen provides an evocative example of what is well within reach technically:
Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.
Tomorrow’s most successful organizations will have tens or even hundreds of AIs working alongside and on behalf of their human staff in planful, constructive ways. Two operational concepts, the “eager intern” and the “autonomous agent,” can help jumpstart your journey.
AI as an “eager intern”
Business school professor and technologist Ethan Mollick offers what I’ve found to be a very useful framing for how to think about generative AI: “It is not good software, [rather] it is pretty good people.”
And rather than thinking of AIs as people who replace those already on the payroll, treat them like “eager interns” who can help your existing staff be more productive.
This metaphor can help on two fronts. First, it keeps the need for human supervision front and center. Just as hiring and productively managing interns is a valuable competency for an organization, so too is using ChatGPT, Microsoft’s Copilot, or Google’s Bard. But you would no more blindly trust this class of model than you would even the most promising intern.
Second, and as important: IT isn’t responsible for hiring interns in Finance and HR. Likewise, Finance and HR (and every other function) must build their own competency in figuring out how to use these tools to be more productive. The job to be done is closer to answering domain-specific staffing questions than IT questions.
This is table stakes on the path to the breakthrough in productivity: “autonomous agents.”
Agents of productivity
Autonomous agents chain together tools so the AI, once given an objective, can create tasks, complete tasks, create new tasks, reprioritize the task list, complete the new top task, and loop until the objective is reached. (This is a good introduction to use cases that includes an example of how something like Andreessen’s infinitely patient math tutor might be built.)
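The loop described above can be sketched in a few lines of Python. This is a minimal sketch, not any real framework: the functions plan_tasks, execute, and follow_up_tasks are stand-ins for the LLM calls a real agent (connected to a model like GPT-4) would make.

```python
from collections import deque

def plan_tasks(objective):
    # Stand-in for an LLM call that decomposes the objective into tasks.
    return [f"research: {objective}", f"draft: {objective}", f"review: {objective}"]

def execute(task):
    # Stand-in for an LLM or tool call that completes a single task.
    return f"done: {task}"

def follow_up_tasks(task, result):
    # Stand-in for an LLM call that proposes new tasks from the last result.
    # Returning an empty list here keeps the demo loop finite.
    return []

def run_agent(objective, max_steps=10):
    """Complete the top task, generate follow-ups, reprioritize, and loop."""
    tasks = deque(plan_tasks(objective))      # create tasks
    results = []
    while tasks and len(results) < max_steps:
        task = tasks.popleft()                # complete the current top task
        result = execute(task)
        results.append(result)
        tasks.extend(follow_up_tasks(task, result))  # create new tasks
        tasks = deque(sorted(tasks))          # reprioritize (alphabetical stand-in)
    return results
```

With real LLM calls substituted in, the same structure yields the plan/execute/reprioritize cycle the paragraph describes; the max_steps cap is the kind of guardrail that keeps an agent from looping indefinitely.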
But if you’re a CEO who wants to accelerate getting to “AI for all,” I recommend taking 10 minutes with your leadership team to read my colleague Ed Anuff’s explanation of how a consumer-focused agent could be built today. Here’s a key excerpt:
You want to build a deck in your backyard, so you open your home-improvement store’s mobile application and ask it to build you a shopping list. Because the application is connected to an LLM like GPT-4 and many data sources (the company’s own product catalog, store inventory, customer information and order history, along with a host of other data sources), it can easily tell you what you’ll need to complete your DIY project. But it can do much more.
If you describe the dimensions and features you want to include in your deck, the application can offer visualization tools and design aids. Because it knows your postal ZIP code, it can tell you which stores within your vicinity have the items you need in stock. It can also, based on the data in your purchase history, suggest that you might need a contractor to help you with the job — and provide contact information for professionals near you.
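The excerpt above can be sketched as an agent that routes a request to several data sources. Everything here is illustrative: the catalog and inventory data are invented, and the routing is hard-coded for clarity, whereas a real application would let the LLM decide which source to query (for example, via function calling).

```python
# Hypothetical data sources the excerpt's home-improvement app would draw on.
CATALOG = {"deck": ["pressure-treated lumber", "deck screws", "joist hangers"]}
INVENTORY = {
    ("02134", "deck screws"): 40,
    ("02134", "pressure-treated lumber"): 12,
}

def shopping_list(project):
    # Product-catalog lookup: which items does this project need?
    return CATALOG.get(project, [])

def in_stock_nearby(zip_code, items):
    # Store-inventory lookup keyed by the customer's ZIP code.
    return {item: INVENTORY.get((zip_code, item), 0) for item in items}

def agent_answer(project, zip_code):
    # A real agent would have the LLM choose which data source to call;
    # here the routing is fixed so the example stays self-contained.
    items = shopping_list(project)
    return {"items": items, "stock": in_stock_nearby(zip_code, items)}
```

The point of the design is that the LLM supplies the conversational layer while the company's own systems (catalog, inventory, order history) supply the facts, which is what keeps the answers grounded in data the business already trusts.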
This type of experience is not just the future for your customers. It needs to be the future of all your employees, too. How can AI help marketers track your brand on social media? How can it assist legal teams with contracts? How can it help HR recruit, hire, and develop people?
Your functional teams and business units should be gaming out ideas and getting started on autonomous agents today. There’s no time like the present to get more productive: The technology is ready and waiting.
Learn more about how DataStax enables real-time AI here.
About Bryan Kirschner:
Bryan is Vice President, Strategy at DataStax. For more than 20 years he has helped large organizations build and execute strategy when they are seeking new ways forward and a future materially different from their past. He specializes in removing fear, uncertainty, and doubt from strategic decision-making through empirical data and market sensing.
Copyright for syndicated content belongs to the linked source: CIO – https://www.cio.com/article/643327/hate-being-more-productive-ignore-ai-agents.html