The recent history of A.I. chatbots can be neatly divided into two distinct phases. The first phase, which began with the release of ChatGPT and its counterparts, showcased chatbots capable of discussing a wide array of topics. Whether the subject was Greek mythology, vegan recipes, or Python scripts, these chatbots demonstrated an impressive ability to generate convincing text, albeit occasionally with generic or inaccurate content.
However, this initial capability was merely a prelude to the second phase: an era in which A.I. would transition from merely talking about things to actively doing them. Tech companies have long promised a future where A.I. "agents" would handle tasks such as sending emails, scheduling meetings, and booking reservations, and even tackle complex challenges like negotiating a raise or shopping for Christmas presents.
This transition came a step closer recently when OpenAI, the creator of ChatGPT, announced a significant development: users will be able to create their own personalized chatbots, referred to as GPTs. These custom chatbots differ from the standard ChatGPT in several crucial ways.
First, they are programmed for specific tasks, with examples ranging from a "Creative Writing Coach" to a "Mocktail Mixologist" that suggests nonalcoholic drink recipes. Second, these bots can pull information from private data sources, such as a company's internal H.R. documents or a real estate listing database, and incorporate that data into their responses. Third, users can allow the bots to connect to other parts of their online lives, such as calendars, to-do lists, and Slack accounts, enabling the bots to perform actions using the user's credentials.
While this advancement holds immense potential for efficiency and convenience, it also raises concerns, particularly among A.I. safety researchers. The fear is that granting bots more autonomy could lead to unintended consequences, including the possibility of malicious actors creating rogue A.I.s with dangerous goals. The Center for AI Safety has listed autonomous agents among the "catastrophic A.I. risks," emphasizing the need for caution in this rapidly advancing field.
OpenAI’s custom GPTs, however, appear to be relatively benign. Demonstrations during the company’s developer conference showcased the bots automating tasks like creating coloring pages or explaining card game rules. They currently excel at simple, well-defined tasks but struggle with complex planning or extended sequences of actions.
The ability to customize chatbots represents a significant step in OpenAI’s strategy of “iterative deployment.” Instead of making big leaps at long intervals, the company focuses on releasing small improvements at a fast pace. While the current custom GPTs have limitations, OpenAI envisions a future where users can offer their personalized chatbots to the public through an app store, with revenue shared among successful creators.
To explore the potential of these custom GPTs, I had the opportunity to create a couple of personalized chatbots. The first, “Day Care Helper,” was designed to assist with questions about my son’s daycare. The second, “Grandpa Roose’s Financial Advice,” drew inspiration from a booklet written by my grandfather, an economist and stock picker.
Despite their limitations, these chatbots offered glimpses of the roles autonomous A.I. agents could play in various fields. Imagine the impact on a company’s benefits department, where routine inquiries could be handled by a chatbot, freeing human administrators to focus on more strategic tasks. Similarly, customer service departments could benefit from A.I. agents responding to the majority of requests, potentially reducing the need for a large human-staffed support team.
While the current wave of custom GPTs seems relatively harmless, the implications of making A.I. agents more autonomous and embedding them into various aspects of our lives are profound. As A.I.s evolve to understand users on a deep level, there’s the potential for them to perform complex actions with or without direct oversight.
If OpenAI’s vision holds, we may be on the brink of a world where A.I.s become extensions of ourselves: artificial satellite brains capable of navigating the world, gathering information, and taking action on our behalf. As we stand at this technological frontier, the question arises: Are we ready for this next phase of A.I. integration into our lives? The future might arrive sooner than we think, and preparation could be the key to navigating this new world of autonomous A.I. agents.