![Anthony Alford Profile](https://pbs.twimg.com/profile_images/1023249651938385920/onj1EZ_p_x96.jpg)
Anthony Alford
@anthony_alford
87 Followers · 174 Following · 361 Statuses
Joined February 2010
Introducing ChatGPT Search! ChatGPT can now incorporate current information from the web and include links to its sources. Read more in my latest @InfoQ news.
OpenAI Releases ChatGPT Search Feature by @anthony_alford
Can an AI coding assistant help you? A recent study suggests developers could increase productivity by 26%. Read more in my latest @InfoQ news!
Study Shows AI Coding Assistant Improves Developer Productivity by @anthony_alford
#Doom enthusiasts know the game will run on just about anything. Now Google has it running in a neural network. Read more in my latest @InfoQ news!
Google Announces Game Simulation AI GameNGen by @anthony_alford
Alibaba released Qwen2-Math, a series of LLMs tuned for solving mathematical problems; and Qwen2-Audio, a family of multi-modal LLMs that can accept voice or text input. Read more in my latest @InfoQ news!
Alibaba Releases Two Open-Weight Language Models for Math and Voice Chat by @anthony_alford
Maybe you've heard about #AppleIntelligence (AI? well played, Apple!). Read my latest @InfoQ news about the Apple Foundation Models that power several of its features.
Apple Unveils Apple Foundation Models Powering Apple Intelligence by @anthony_alford
Kolmogorov–Arnold Network (KAN) models, a new type of neural network, outperform larger perceptron-based models on physics modeling tasks and provide a more interpretable visualization. Read more in my latest @InfoQ news!
University Researchers Create New Type of Interpretable Neural Network by @anthony_alford
RT @InfoQ: InfoQ AI, ML, and Data Engineering Trends in 2024 by @srinip, @rolandmeertens, @anthony_alford, @domingu…
RT @InfoQ: University of Pennsylvania Researchers Develop Processorless Learning Circuitry by @anthony_alford
Three new open-weight #LLMs from Mistral AI: Mistral NeMo, a 12B parameter general-purpose LLM; Codestral Mamba, a 7B parameter code-generation model; and Mathstral, a 7B parameter model fine-tuned for math and reasoning. Read more in my latest @InfoQ news!
Mistral AI Releases Three Open-Weight Language Models by @anthony_alford
No joke: Google's JEST automates dataset curation, so that models trained on the data require 10x less computation than baseline methods. Read more in my latest @InfoQ news!
Google's JEST Algorithm Automates AI Training Dataset Curation and Reduces Training Compute by @anthony_alford
OpenAI's latest #ChatGPT update is out: GPT-4o mini. This is a smaller, faster, cheaper model that outperforms GPT-3.5 Turbo. Read more in my latest @InfoQ news!
OpenAI Releases GPT-4o mini Model with Improved Jailbreak Resistance by @anthony_alford
Google's new open-source #LLM, Gemma 2, outperforms other models of comparable size and is competitive with models 2x larger. Read more in my latest @InfoQ news!
Google Open Sources 27B Parameter Gemma 2 Language Model by @anthony_alford
#ChatGPT is pretty good at writing code, but it's not perfect. OpenAI created CriticGPT to help find bugs in ChatGPT-generated code. CriticGPT catches more bugs and produces better critiques than human coders. Read more in my latest @InfoQ news!
OpenAI's CriticGPT Catches Errors in Code Generated by ChatGPT by @anthony_alford
No @InfoQ news from me this week, BUT! We have published our Generative AI e-magazine! I'm very excited about this one, and I hope you enjoy it. Check it out!
Practical Applications of Generative AI authored by @InfoQ, reviewed by @anthony_alford
One caveat with most #LLMs is the context-length limit: you can only input so much data, or only have a limited-length conversation with a chatbot. Meta's new LLM, MEGALODON, addresses this with an unlimited context length. Read more in my latest @InfoQ news!
Meta Open-Sources MEGALODON LLM for Efficient Long Sequence Modeling by @anthony_alford
OpenAI has published their Model Spec that describes rules and objectives for the behavior of their #GPT models. It's intended for use in creating data for fine-tuning the models. Read more in my latest @InfoQ news!
OpenAI Publishes GPT Model Specification for Fine-Tuning Behavior by @anthony_alford
The Stanford AI Index report is out, so you can keep up with top trends in AI, such as 8x growth in Generative AI investment since 2022. Read more in my latest @InfoQ news!
Stanford AI Index 2024 Report: Growth of AI Regulations and Generative AI Investment by @anthony_alford
OpenAI releases their newest model, GPT-4o: it's faster and has improved capabilities in handling speech, vision, and multilingual tasks. Read more in my latest @InfoQ news!
OpenAI Announces New Flagship Model GPT-4o by @anthony_alford
Apple joins the #LLM race: their OpenELM model uses a scaled-attention mechanism for more efficient parameter allocation and outperforms similarly sized models while requiring fewer training tokens. Read more in my latest @InfoQ news!
Apple Open-Sources One Billion Parameter Language Model OpenELM by @anthony_alford
It's here! Meta has open-sourced #llama3 ... and it's a "major leap" over Llama 2! With millions of downloads and hundreds of derived models, it's clearly going to be big. Read more in my latest @InfoQ news!
Meta Releases Llama 3 Open-Source LLM by @anthony_alford