
Showing posts from September, 2024

I am moving back to the Google platform, less excited by what Apple is offering

I have been playing with the Apple Intelligence betas in iPadOS and macOS, and while I like the direction Apple is heading, I am getting more use from Google's Gemini, both for general analysis of very large input contexts and for effective integration with my content in Gmail, Google Calendar, and Google Docs. While I find the latest Pixel phone to be compelling, I will stick with Apple hardware since I don't want to take the time to move my data and general workflow to a Pixel phone. The iPhone is the strongest lock-in that Apple has on me because of the time investment to change. The main reason I am feeling less interested in the Apple ecosystem and platform is that I believe our present-day workflows are intimately wrapped up with the effective use of LLMs, and it is crazy to limit oneself to just one or two vendors. I rely on running local models on Ollama, super-fast APIs from Groq (I love Groq for running most of the better open-weight models), and other APIs from Mistral…
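As a concrete illustration of this multi-vendor pattern (not code from the post), here is a minimal Python sketch: both Ollama and Groq expose OpenAI-compatible endpoints, so the same client code can target a local model or a hosted one by swapping the base URL, key, and model name. The model names shown are examples and change over time.

```python
# Minimal sketch: one OpenAI-compatible client, multiple LLM vendors.
import os
from openai import OpenAI

def ask(client: OpenAI, model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Local model served by Ollama (no real key needed; "ollama" is a placeholder).
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
print(ask(local, "llama3.1", "Summarize the benefits of running LLMs locally."))

# Hosted open-weight model on Groq (requires GROQ_API_KEY in the environment).
groq = OpenAI(base_url="https://api.groq.com/openai/v1",
              api_key=os.environ["GROQ_API_KEY"])
print(ask(groq, "llama-3.1-70b-versatile", "Summarize the benefits of fast inference."))
```

Because the request shape is identical across vendors, switching providers is a one-line change, which is what makes avoiding single-vendor lock-in practical.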

Getting closer to AGI? Google's NotebookLM and Replit's AI Coding Agent

Putting "closer to AGI?" in a blog title might border on being clickbait, but I will argue that it is not! I have mostly earned my living in the field of AI since 1982 and I argue that the existence of better AI driven products and the accelerating rate of progress in research, that we are raising the bar on what we consider AGI to be. I have had my mind blown twice in the last week: Today I took the PDF for my book "Practical Artificial Intelligence Programming With Clojure ( you can read it free online here ) and used it to create a notebook in Google's  NotebookLM  and asked for a generated 8 minute podcast. This experimental app created a podcast with two people discussing my book accurately and showing wonderful knowledge of technology. If you want to listen to the audio track that Google's NotebookLM created,  here is a link to the WAV audio file Last week I signed up for a one year plan on  Replit.com  after trying the web based IDE for Haskell and Python

New OpenAI o1-preview and o1-mini models, and one week of experience with the Replit.com AI Coding Agent

I have only spent a short while experimenting with the new o1 models: so far they are very impressive for science, math, and instruction following. You need a ChatGPT Plus account to try them, or you can perform rate-limited queries for half the monthly cost using Abacus AI. The thing I am most impressed with (this week!) is the Replit.com AI coding agent: after briefly trying it, I pre-paid for a one-year subscription. I quickly rewrote in JavaScript a complex Clojure web app I wrote 12 years ago, making CookingSpace.com (new JavaScript version) much less expensive to host. I gave a live demo of Replit AI in my weekly AI demo and group chat. Please join: Mark Watson's informal AI presentations and group chats
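For readers who want to try the o1 models programmatically rather than through ChatGPT Plus, here is a minimal sketch using OpenAI's Python SDK, assuming your API account has access to these models (access was initially limited). Note that at launch the o1 models did not accept system messages or a temperature setting.

```python
# Minimal sketch of calling an o1 reasoning model via the OpenAI Python SDK.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the cheaper, faster variant
    messages=[{"role": "user", "content":
               "A train travels 120 miles in 1.5 hours. What is its average speed?"}],
)
print(response.choices[0].message.content)
```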

Topics: Recipe: Mark’s African Stew, and converting my Clojure CookingSpace web site to JavaScript

I wanted to convert my server-side web site CookingSpace.com to mostly client-side JavaScript. I used the Replit.com AI coding agent to do this, and while that is a subject for another blog article, I showed the Replit coding AI a snippet of my JSON recipes and nutrients file and described in a few paragraphs the functionality of the new web site (a hypothetical sketch of such a JSON entry appears after the recipe below). I also want to share a recipe that I created: Mark's African Stew

Ingredients:

- Optional: 3/4 to 1 pound of lamb shoulder, deboned and cut into cubes, leaving some fat on the meat
- 2 tablespoons of any non-seed oil (avocado oil is good; avoid seed oils like canola, corn, soybean, sunflower, etc.)
- 1 brown onion, coarsely chopped
- 1 eggplant, partially skinned, chopped into bite-sized pieces, and salted heavily in a strainer. Before cooking, rinse well to remove excess salt!
- ground coriander (at least 1 teaspoon)
- cumin (at least 2/3 teaspoon)
- paprika (1 to 2 teaspoons, depending on how you like the taste of paprika)
- ginger powder (at least 1/2 t…
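Purely as illustration, here is a hypothetical sketch of what one entry in a recipes-and-nutrients JSON file might look like; the post does not show the actual CookingSpace.com schema, so every field name and value below is made up.

```python
# Hypothetical recipe entry; the real CookingSpace.com JSON schema is not
# shown in the post, so all names and numbers here are invented examples.
import json

recipe = {
    "name": "Mark's African Stew",
    "ingredients": [
        {"item": "lamb shoulder", "amount": 1.0, "unit": "pound", "optional": True},
        {"item": "brown onion", "amount": 1, "unit": "whole"},
        {"item": "eggplant", "amount": 1, "unit": "whole"},
    ],
    "nutrients": {"calories": 520, "protein_g": 32},  # made-up numbers
}
print(json.dumps(recipe, indent=2))
```

Showing a coding agent a representative data snippet like this, plus a few paragraphs of functional description, is the workflow described above.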

Code and notes from my recent talk: Exploring the Future of AI: Introduction to using LLMs with Python

Topics: large-context prompts with LLMs vs. RAG using embeddings and vector stores; how to avoid LLM hallucination; three code demos. I started an informal code demo and group conversation Meetup group (link), and today I gave a fifteen-minute code demo followed by a conversation with the attendees. Here is a GitHub repo with the code examples: https://github.com/mark-watson/talk2_LLM_Python_intro

Here are my talk notes: Exploring the Future of AI: Introduction to using LLMs with Python

Riff on 'AI grounding' and how LLMs help: LLMs, trained on vast amounts of text, excel at recognizing patterns and providing contextually relevant responses. They mimic grounded understanding by referencing large datasets that encompass a variety of real-world scenarios. For example, they can infer meanings from complex contexts by drawing on their training data. When LLMs are integrated with other modalities, such as vision or audio (e.g., vision-language models), the grounding improves. These models can as…
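To make the talk's first topic concrete, here is a minimal sketch (not the repo's actual demo code) contrasting the two approaches: stuffing a whole document into a large-context prompt versus RAG-style retrieval over embedded chunks. The file name notes.txt and the chunk size are placeholder assumptions.

```python
# Sketch: large-context prompting vs. simple RAG with embeddings.
import os
import numpy as np
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

document = open("notes.txt").read()  # hypothetical source document
chunks = [document[i:i + 1000] for i in range(0, len(document), 1000)]
chunk_vectors = embed(chunks)

question = "What does the document say about grounding?"

# Approach 1: large-context prompt -- send everything, let the model sort it out.
full_prompt = f"Answer from this document only:\n\n{document}\n\nQuestion: {question}"

# Approach 2: RAG -- retrieve only the most relevant chunks by cosine similarity.
q = embed([question])[0]
scores = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
top_chunks = [chunks[i] for i in np.argsort(scores)[-3:]]
rag_prompt = ("Answer from these excerpts only; say 'not found' if the answer "
              "is missing (one guard against hallucination):\n\n"
              + "\n---\n".join(top_chunks) + f"\n\nQuestion: {question}")

for prompt in (full_prompt, rag_prompt):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```

The "answer from these excerpts only; say 'not found'" instruction in the RAG prompt is one simple grounding technique for reducing hallucination, since it gives the model explicit permission to decline rather than invent an answer.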