Mike Krieger may be best known as the co-founder of Instagram, but he’s now the chief product officer at Anthropic, which makes Claude, a business-focused ChatGPT competitor with an equally powerful AI model.
Claude exists as a free, standalone chatbot, though Anthropic also offers paid plans for individuals and large organizations. Today, Anthropic teased a new capability that could keep people using Claude all day or even automate parts of their work entirely.
Called “computer use,” the feature integrates Claude with the applications you already use and controls your computer “the way people do – by looking at a screen, moving a cursor, clicking buttons and typing text.”
For now, computer use is limited to developers who sign up for the Claude API. The launch comes about five months after Krieger joined Anthropic and received his marching orders from CEO Dario Amodei: figure out how to make Claude useful at work.
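Developers enable the capability by declaring a special “computer” tool in their API request, telling Claude the screen dimensions it will be looking at. A minimal sketch of what such a request body looks like, assuming the beta tool and model identifiers from Anthropic’s announcement (these may change; real calls also require the API beta header and an agent loop that executes Claude’s clicks and keystrokes):

```python
import json

# Sketch of a request body for Claude's "computer use" beta.
# The identifiers below (computer_20241022, claude-3-5-sonnet-20241022)
# come from Anthropic's October 2024 announcement; check the current
# documentation before relying on them.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [
        {
            # The "computer" tool lets Claude see the screen and act on it.
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    "messages": [
        {"role": "user", "content": "Open the spreadsheet and sum column B."}
    ],
}

print(json.dumps(payload, indent=2))
```

In practice the client sends this payload (with the beta header), receives tool-use instructions such as “click at (x, y)” back from the model, performs them, and returns a fresh screenshot until the task is done.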
Krieger tells PCMag that the platform has “millions” of individual users, but its main focus is growing the number of business accounts — already in the “tens of thousands” — by positioning its AI as an “invaluable assistant.” The problem is that most companies are still figuring out how to use AI and how to turn it from a buzzy moonshot into a revenue driver.
In an email, Krieger offered three tips for companies looking to get started with AI.
1. Identify specific ways AI can add immediate value. “Start with small pilot projects that have clear objectives and measurable outcomes to gain experience and demonstrate ROI before scaling,” he writes.

2. Invest in education and training throughout your organization. “Ensure employees understand the capabilities and limitations of AI for effective implementation and responsible use.”

3. Prioritize ethics and data security from the start. “Establish clear guidelines for the use of AI that align with company values and comply with regulations. Remember, responsible adoption of AI involves fostering a culture of innovation balanced with ethical awareness.”
The ethical considerations are multifaceted. Are these AI systems putting people out of work, training on stolen data, or serving up flatly incorrect answers? Will people lose the ability to find information on their own and grow as dependent on chatbots as they are on TikTok, Instagram or X? Krieger argues that Claude will not be as addictive as social media.
Claude’s homepage (Credit: Anthropic)
“Claude provides efficient, task-oriented support in various applications, delivering useful results and allowing users to continue working independently,” he says. “Our models…are designed to increase productivity and problem solving, not to attract attention or create addiction.”
Krieger insists that Anthropic is “committed to responsible AI development, guided by rigorous policies, extensive testing and collaboration with external experts.”
Anthropic’s CEO supported California’s now-dead AI safety bill, which would have introduced new safety and accountability mechanisms for large AI systems. At the same time, a group of authors is suing Anthropic for training its AI on copyrighted books, which the company has not publicly addressed.
AI brings with it a laundry list of technical unknowns. AI works differently than “traditional software,” says Krieger, as evidenced by how models like Claude evolve based on user feedback. While Instagram “likes” mostly feed the poster’s ego, a thumbs up or down on a Claude answer can fundamentally change how the system answers your next question.
This concept, called reinforcement learning, makes AI even more powerful, providing “increasingly valuable business solutions — a capability far beyond social media platforms,” says Krieger. In five years, he expects Claude to “achieve expert-level competence” in many fields, such as health care, finance and engineering. Is your business ready to put Claude on the payroll?