The Trump administration announced on Friday that it had canceled $400 million in federal grants and contracts to Columbia University, citing the university's failure to protect Jewish students from harassment during last year's protests over the war in Gaza. The announcement escalated the administration's targeting of Columbia, where pro-Palestinian protests set off a nationwide debate over free speech, campus policing, and antisemitism and inspired similar demonstrations at schools across the country. The move is also the latest action by the Trump administration against elite higher-education institutions. It follows last year's congressional hearings, which led to the departure of the presidents of Harvard and the University of Pennsylvania, and comes after recent executive orders barring diversity, equity, inclusion, and "woke" programs at educational institutions that receive federal funds. On Monday, Linda McMahon, the newly confirmed secretary of education, warned that the administration had its sights set on Columbia: the university would face the loss of federal funding, the lifeblood of major research universities, if it did not take additional action to combat antisemitism on campus.

Student Loans

Separately, speaking from the Oval Office yesterday, President Trump said he would soon sign an executive order directing the Education Department to modify the Public Service Loan Forgiveness Program, which forgives a portion of federal student loan debt for people who work in public-sector jobs, including at nonprofit organizations. Trump alleged that some qualifying nonprofit organizations may "engage in illegal, or what we would consider to be improper, activities."
IBL News | New York

Google announced a free Gemini Code Assist tier for individuals in public preview. It is powered by the Gemini 2.0 model "with the latest AI capabilities." It can generate entire code blocks and supports 38 programming languages. Developers can instruct Gemini Code Assist through a chat interface by asking it, for example, to "build me a simple HTML form with fields for name, email, and message, and then add a 'submit' button." With this offer, Google targets GitHub Copilot, its most direct competitor. GitHub Copilot's free tier provides 2,000 code completions and 50 Copilot Chat messages per month. Meanwhile, Google offers up to 180,000 code completions per month, "a ceiling so high that even today's most dedicated professional developers would be hard-pressed to exceed it," said Ryan J. Salva, Google's senior director of product management. The free Individual tier doesn't include the advanced business-focused features available in the Standard and Enterprise versions, such as productivity metrics, integrations with Google Cloud BigQuery services, or customized responses based on private code data.
Anthropic released Claude 3.7 Sonnet, its most advanced AI model, this week. According to the company:

• "Claude 3.7 Sonnet, the first hybrid reasoning model on the market, can produce near-instant responses or extended, step-by-step thinking that is made visible to the user."
• "API users also have fine-grained control over how long the model can think for."
• "Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development."

Reasoning models like o3-mini, R1, Google's Gemini 2.0 Flash Thinking, and xAI's Grok 3 (Think) use more time and computing power before answering questions. Claude 3.7 Sonnet is now available on all Claude plans (Free, Pro, Team, and Enterprise) as well as the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI. Extended thinking mode is available on all surfaces except the free Claude tier. Its price is the same as its predecessor's (the company skipped a version number): $3 per million input tokens and $15 per million output tokens, with thinking tokens billed as output. That makes it more expensive than OpenAI's o3-mini ($1.10 per million input tokens / $4.40 per million output tokens) and DeepSeek's R1 ($0.55 per million input tokens / $2.19 per million output tokens), though o3-mini and R1 are strictly reasoning models, not hybrids like Claude 3.7 Sonnet. In addition, Anthropic introduced, as a preview, Claude Code, a command-line tool for agentic coding. The company said: "Early testing demonstrated Claude's leadership in coding capabilities across the board: Cursor noted Claude is once again best-in-class for real-world coding tasks, with significant improvements in areas ranging from handling complex codebases to advanced tool use. Cognition found it far better than any other model at planning code changes and handling full-stack updates.
Vercel highlighted Claude's exceptional precision for complex agent workflows, while Replit has successfully deployed Claude to build sophisticated web apps and dashboards from scratch, where other models stall. In Canva's evaluations, Claude consistently produced production-ready code with superior design taste and drastically reduced errors."
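The price gap between these models is easier to see with a back-of-the-envelope calculation. The sketch below compares per-request costs at the per-million-token list prices quoted above; the token counts in the example are invented for illustration.

```python
# Per-million-token list prices in USD, as quoted above.
# Thinking tokens bill as output tokens.
PRICES = {
    "claude-3.7-sonnet": {"input": 3.00, "output": 15.00},
    "o3-mini":           {"input": 1.10, "output": 4.40},
    "deepseek-r1":       {"input": 0.55, "output": 2.19},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the quoted list prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 10,000-token response
# (including any visible "thinking" tokens, billed as output).
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 10_000):.4f}")
```

At these hypothetical token counts, Claude 3.7 Sonnet costs roughly 3.4x as much per request as o3-mini and about 6.8x as much as R1, though the hybrid design means a single Claude model covers both quick and extended-thinking responses.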
This month, YouTube integrated Google DeepMind's latest text-to-video model, Veo 2, for its Shorts creators. Veo 2, Google's response to OpenAI's Sora, allows users to generate AI backgrounds for their Shorts through a feature called Dream Screen. To use Veo 2 in YouTube Shorts, creators can open the Shorts camera, select Green Screen, and then navigate to Dream Screen, where they can enter a text prompt to generate a video. YouTube applies a watermark tool called SynthID to indicate that videos are AI-generated. YouTube is also launching another capability powered by Veo 2 that allows users to generate standalone video clips via text prompts and add them to any Short. To create such a clip, users open the Shorts camera, tap Add, then Create at the top. After entering a prompt, they select an image, tap Create video, and choose the desired length. These features are available in the U.S., Canada, Australia, and New Zealand, and YouTube plans to expand access later.
Researchers at Stanford and the University of Washington said in a paper released this month that they were able to train an AI reasoning model called s1, which performed similarly to OpenAI's o1 and DeepSeek's R1 on math and coding. The s1 model, along with the data and code, is available on GitHub. According to the researchers, its training cost less than $50 in cloud computing credits. The team started with an off-the-shelf base model and then fine-tuned it through distillation, a process for extracting the "reasoning" capabilities from another AI model by training on its answers. The model was distilled from Gemini 2.0 Flash Thinking Experimental, offered for free via the Google AI Studio platform. Distillation is the same approach Berkeley researchers used to create an AI reasoning model for around $450 last month. OpenAI has accused DeepSeek of improperly harvesting data from its API for model distillation. Distillation is a suitable method for cheaply re-creating an AI model's capabilities, but it doesn't create new AI models. The s1 paper suggested that reasoning models can be distilled with a relatively small dataset using supervised fine-tuning (SFT), in which an AI model is explicitly instructed to mimic certain behaviors in a dataset. More specifically, s1 was based on a small, free AI model from Alibaba-owned Chinese AI lab Qwen. To train s1, the researchers created a dataset of just 1,000 carefully curated questions paired with answers to those questions and the "thinking" process behind each answer from Google's Gemini 2.0 Flash Thinking Experimental. After training, which took less than 30 minutes using 16 Nvidia H100 GPUs, s1 achieved strong performance on specific AI benchmarks. Per the paper, researchers used a nifty trick to get s1 to double-check its work and extend its "thinking" time: They told it to wait.
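The distillation recipe described above boils down to turning each teacher output into one supervised training record. The sketch below shows a hypothetical way such a record could be assembled; the field names, the `<think>` delimiter, and the sample content are illustrative assumptions, not taken from the s1 repository.

```python
# Hypothetical sketch of assembling s1-style distillation data: each
# training example pairs a question with the teacher model's reasoning
# trace and final answer, formatted for supervised fine-tuning (SFT).
# Field names and the <think> wrapper are illustrative, not s1's actual format.

def build_sft_example(question: str, reasoning: str, answer: str) -> dict:
    """Fold a (question, reasoning, answer) triple into one SFT record."""
    target = f"<think>\n{reasoning}\n</think>\n{answer}"
    return {"prompt": question, "completion": target}

# One of the ~1,000 curated examples (content invented for illustration):
example = build_sft_example(
    question="What is 17 * 24?",
    reasoning="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    answer="408",
)
print(example["completion"].splitlines()[0])  # prints "<think>"
```

During SFT, the student model is trained to reproduce the `completion` text given the `prompt`, which is how it picks up the teacher's visible reasoning style from such a small dataset.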
Adding the word "wait" during s1's reasoning helped the model arrive at slightly more accurate answers. Experts said that s1 raises fundamental questions about the commoditization of AI models.
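The "wait" trick (the paper calls it budget forcing) can be sketched as a decoding loop that refuses to let the model stop thinking early: when the model tries to end its reasoning before a minimum budget is spent, the loop appends "Wait" instead, nudging it to re-examine its work. The `generate_step` callable below is a toy stand-in for a real decoding loop, not the s1 implementation.

```python
# Minimal sketch of budget forcing: suppress the model's attempt to end
# its reasoning and inject "Wait" until a minimum number of reasoning
# steps has been produced. `generate_step` is a stand-in for a real
# token-by-token decoder.

def think_with_budget(generate_step, prompt: str, min_steps: int) -> list:
    """Run a reasoning loop, injecting 'Wait' until min_steps is reached."""
    trace = []
    while len(trace) < min_steps:
        chunk = generate_step(prompt, trace)
        if chunk == "<end_of_thinking>":
            trace.append("Wait")  # force continued reasoning
        else:
            trace.append(chunk)
    return trace

# Toy "model" that always tries to stop after two reasoning steps.
def toy_model(prompt, trace):
    return "step" if len(trace) < 2 else "<end_of_thinking>"

trace = think_with_budget(toy_model, "What is 2+2?", min_steps=4)
print(trace)  # ['step', 'step', 'Wait', 'Wait']
```

Each injected "Wait" buys the model another pass over its own partial reasoning, which is why the trick tends to yield slightly more accurate answers at the cost of extra compute.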