News Gist .News


OpenAI Plans to Charge Up to $20,000 a Month for AI 'Agents'

OpenAI may be planning to charge up to $20,000 per month for specialized AI "agents," according to The Information. The publication reports that OpenAI intends to launch several "agent" products tailored for different applications, including sorting and ranking sales leads and software engineering. One, a high-income knowledge worker agent, will reportedly be priced at $2,000 a month.

See Also

OpenAI is reportedly planning to introduce specialized AI agents, with one such agent potentially priced at $20,000 per month and aimed at high-level research applications. This pricing strategy reflects OpenAI's need to recoup losses, which amounted to approximately $5 billion last year due to operational expenses. The decision to launch these premium products signals a significant shift in how AI services may be monetized in the future.

OpenAI is making a high-stakes bet on its AI future, reportedly planning to charge up to $20,000 a month for its most advanced AI agents. These Ph.D.-level agents are designed to take actions on behalf of users, targeting enterprise clients willing to pay a premium for automation at scale. A lower-tier version, priced at $2,000 a month, is aimed at high-income professionals. OpenAI is betting big that these AI assistants will generate enough value to justify the price tag, but whether businesses will bite remains to be seen.

OpenAI has expanded access to its latest model, GPT-4.5, allowing more users to benefit from its improved conversational abilities and reduced hallucinations. The model is now available to ChatGPT Plus subscribers at that tier's $20 monthly fee, lowering the barrier to entry for those interested in trying it out. With the expanded rollout, OpenAI aims to make everyday tasks easier across a range of topics, including writing and solving practical problems.

OpenAI has begun rolling out its newest AI model, GPT-4.5, to users on its ChatGPT Plus tier, promising a more advanced experience with its increased size and capabilities. However, the new model's high costs are raising concerns about its long-term viability. The rollout comes after GPT-4.5 launched for subscribers to OpenAI's $200-a-month ChatGPT Pro plan last week.

The marketing term "PhD-level" AI refers to advanced language models that excel on specific benchmarks but struggle with critical concerns such as accuracy, reliability, and creative thinking. The $20,000 monthly price OpenAI is reportedly considering for its research-focused agents has sparked debate about the value and trustworthiness of these models in high-stakes research applications. The price points reported by The Information point to a premium pricing strategy, but the performance difference between tiers remains uncertain.

OpenAI has introduced NextGenAI, a consortium aimed at funding AI-assisted research across leading universities, backed by a $50 million investment in grants and resources. The initiative, which includes prestigious institutions such as Harvard and MIT as founding partners, seeks to empower students and researchers in their exploration of AI's potential and applications. As this program unfolds, it raises questions about the balance of influence between OpenAI's proprietary technologies and the broader landscape of AI research.

GPT-4.5 offers marginal gains in capability but poor coding performance despite being 30 times more expensive than GPT-4o. The model's high price and limited value are likely due to OpenAI's decision to shift focus from traditional LLMs to simulated reasoning models like o3. While this move may mark the end of an era for unsupervised learning approaches, it also opens up new opportunities for innovation in AI.

GPT-4.5, OpenAI's latest generative AI model, has sparked concerns over its massive size and computational requirements. The new model, internally dubbed Orion, promises improved performance in understanding user prompts but may also pose challenges for widespread adoption due to its resource-intensive nature. As users flock to try GPT-4.5, the implications of this significant advancement on AI's role in everyday life are starting to emerge.

OpenAI Startup Fund has successfully invested in over a dozen startups since its establishment in 2021, with a total of $175 million raised for its main fund and an additional $114 million through specialized investment vehicles. The fund operates independently, sourcing capital from external investors, including prominent backer Microsoft, which distinguishes it from many major tech companies that utilize their own funds for similar investments. The diverse portfolio of companies receiving backing spans various sectors, highlighting OpenAI's strategic interest in advancing AI technologies across multiple industries.

Amazon plans to release companion devices for its artificially intelligent Alexa voice assistant in the fall, Chief Executive Officer Andy Jassy said in an interview with Bloomberg Television. The new devices will enable consumers to complete tasks beyond answering trivia questions, such as hiring someone to fix an oven. Amazon is also planning to charge customers for the latest Alexa capabilities, starting at $19.99 per month.

OpenAI and Oracle Corp. are set to equip a new data center in Texas with tens of thousands of Nvidia's powerful AI chips as part of their $100 billion Stargate venture. The facility, located in Abilene, is projected to house 64,000 of Nvidia's GB200 semiconductors by 2026, marking a significant investment in AI infrastructure. This initiative highlights the escalating competition among tech giants to enhance their capacity for generative AI applications, as seen with other major players making substantial commitments to similar technologies.

Bret Taylor discussed the transformative potential of AI agents during a fireside chat at the Mobile World Congress, emphasizing their higher capabilities compared to traditional chatbots and their growing role in customer service. He expressed optimism that these agents could significantly enhance consumer experiences while also acknowledging the challenges of ensuring they operate within appropriate guidelines to prevent misinformation. Taylor believes that as AI agents become integral to brand interactions, they may evolve to be as essential as websites or mobile apps, fundamentally changing how customers engage with technology.

Meta Platforms plans to test a paid subscription service for its AI-enabled chatbot Meta AI, similar to those offered by OpenAI and Microsoft. This move aims to bolster the company's position in the AI space while generating revenue from advanced versions of its chatbot. However, concerns arise about affordability and accessibility for individuals and businesses looking to access advanced AI capabilities.

Accelerating its push to compete with OpenAI, Microsoft is developing its own powerful AI models and exploring alternatives to power products like its Copilot assistant. The company has built AI "reasoning" models comparable to those offered by OpenAI and is reportedly considering offering them through an API later this year. Meanwhile, Microsoft is testing alternative AI models from various firms as possible replacements for OpenAI technology in Copilot.

OpenAI CEO Sam Altman has revealed that the company is "out of GPUs" due to rapid growth, forcing it to stagger the rollout of its new model, GPT-4.5. This limits access to the expensive and enormous GPT-4.5, which requires tens of thousands more GPUs than its predecessor, GPT-4. The high cost of GPT-4.5 is due in part to its size, with Altman stating it's "30x the input cost and 15x the output cost" of OpenAI's workhorse model.
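Those multiples line up with the models' published per-token API prices. A quick sketch of the arithmetic follows; the per-million-token dollar figures are assumptions drawn from OpenAI's pricing page at GPT-4.5's launch, not stated in this article.

```python
# Assumed API prices in USD per 1M tokens at GPT-4.5's launch (not from the article).
GPT45_INPUT, GPT45_OUTPUT = 75.00, 150.00   # GPT-4.5 research preview
GPT4O_INPUT, GPT4O_OUTPUT = 2.50, 10.00     # GPT-4o, the "workhorse" model

# The cost multiples Altman cited fall out directly from the price ratios.
input_multiple = GPT45_INPUT / GPT4O_INPUT    # 30.0
output_multiple = GPT45_OUTPUT / GPT4O_OUTPUT  # 15.0
```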

OpenAI CEO Sam Altman has announced a staggered rollout for the highly anticipated GPT-4.5, delaying the full launch to manage server demand. In conjunction with this, Altman floated a controversial credit-based payment system that would let subscribers allocate tokens across features instead of receiving unlimited access for a fixed fee. The mixed reactions from users highlight the challenges OpenAI faces in balancing innovation with user satisfaction.

Mistral AI, a French tech startup specializing in AI, has gained attention for its chat assistant Le Chat and its ambition to challenge industry leader OpenAI. Despite its impressive valuation of nearly $6 billion, Mistral AI's market share remains modest, presenting a significant hurdle in its competitive landscape. The company is focused on promoting open AI practices while navigating the complexities of funding, partnerships, and its commitment to environmental sustainability.

Nvidia delivered another record quarter amid surging artificial intelligence (AI) demand, posting Q4 revenue of $39.3 billion, up 78% year-over-year, and providing strong guidance for continued growth. The new Blackwell architecture saw remarkable initial uptake, with $11 billion in revenue during its first quarter of availability, representing the fastest product ramp in Nvidia's history. This significant milestone demonstrates the company's ability to execute at scale and meet high demand for AI-powered solutions.

A recent survey reveals that 93% of CIOs plan to implement AI agents within two years, emphasizing the need to eliminate data silos for effective integration. Despite the widespread use of numerous applications, only 29% of enterprise apps currently share information, prompting companies to allocate significant budgets toward data infrastructure. Utilizing optimized platforms like Salesforce Agentforce can dramatically reduce the development time for agentic AI, improving accuracy and efficiency in automating complex tasks.

Developers can access AI model capabilities at a fraction of the price thanks to distillation, allowing app developers to run AI models quickly on devices such as laptops and smartphones. The technique uses a "teacher" LLM to train smaller AI systems, with companies like OpenAI and IBM Research adopting the method to create cheaper models. However, experts note that distilled models have limitations in terms of capability.
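The teacher-student idea behind distillation can be sketched with a minimal, illustrative loss function: the student is trained to match the teacher's "softened" output distribution. The temperature value and toy logits below are assumptions for illustration, not details from the article.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw logits to probabilities; a temperature > 1 "softens" the
    # distribution, exposing more of the teacher's relative preferences.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's. Training the smaller model to minimize this teaches it to
    # mimic the larger teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss to minimize.
loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In practice this loss is computed over a real network's logits and backpropagated through the student only, which is why the resulting models are cheap to run but, as the experts quoted note, capped by what the teacher can express.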

The UK's Competition and Markets Authority has dropped its investigation into Microsoft's partnership with ChatGPT maker OpenAI due to a lack of de facto control over the AI company. The decision comes after the CMA found that Microsoft did not have significant enough influence over OpenAI since 2019, when it initially invested $1 billion in the startup. This conclusion does not preclude competition concerns arising from their operations.

GPT-4.5 is OpenAI's latest AI model, trained using more computing power and data than any of the company's previous releases, marking a significant advancement in natural language processing capabilities. The model is currently available to subscribers of ChatGPT Pro as part of a research preview, with plans for wider release in the coming weeks. As the largest model to date, GPT-4.5 has sparked intense discussion and debate among AI researchers and enthusiasts.

Chinese AI startup DeepSeek on Saturday disclosed some cost and revenue data related to its hit V3 and R1 models, claiming a theoretical cost-profit ratio of up to 545% per day. This marks the first time the Hangzhou-based company has revealed any information about its profit margins from less computationally intensive "inference" tasks, the stage after training in which trained AI models make predictions or perform tasks. The revelation could further rattle AI stocks outside China, which plummeted in January after web and app chatbots powered by DeepSeek's R1 and V3 models surged in popularity worldwide.
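The arithmetic behind a cost-profit ratio like DeepSeek's claim is straightforward: profit expressed as a percentage of cost. The dollar figures below are assumptions based on the company's public disclosure (theoretical daily revenue versus GPU rental cost), not numbers stated in this article.

```python
def cost_profit_ratio(daily_revenue, daily_cost):
    # Profit as a percentage of cost: (revenue - cost) / cost * 100.
    return (daily_revenue - daily_cost) / daily_cost * 100

# Illustrative figures (assumed): theoretical daily revenue vs. daily GPU cost.
ratio = cost_profit_ratio(562_027, 87_072)  # roughly 545%
```

Note the claim is theoretical: it assumes all inference traffic were billed at full API rates, which free web and app usage is not.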

OpenAI has launched GPT-4.5, a significant advancement in its AI models, offering greater computational power and data integration than previous iterations. Despite its enhanced capabilities, GPT-4.5 does not achieve the anticipated performance leaps seen in earlier models, particularly when compared to emerging AI reasoning models from competitors. The model's introduction reflects a critical moment in AI development, where the limitations of traditional training methods are becoming apparent, prompting a shift towards more complex reasoning approaches.