Posts

I am moving back to the Google platform, less excited by what Apple is offering

I have been playing with the Apple Intelligence betas in iPadOS and macOS, and while I like the direction Apple is heading, I am getting more use from Google’s Gemini, both for general analysis of very large input contexts and for effective integration with my content in Gmail, Google Calendar, and Google Docs. While I find the latest Pixel phone compelling, I will stick with Apple hardware since I don’t want to take the time to move my data and general workflow to a Pixel phone. The iPhone is the strongest lock-in Apple has on me because of the time investment to change. The main reason I am feeling less interested in the Apple ecosystem and platform is that I believe our present-day workflows are intimately wrapped up with the effective use of LLMs, and it is crazy to limit oneself to just one or two vendors. I rely on running local models on Ollama, super fast APIs from Groq (I love Groq for running most of the better open weight models), and other APIs from Mist...
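
To make the multi-vendor point concrete, here is a minimal Python sketch (not from the post itself) that sends the same prompt to a local model served by Ollama and to an open-weight model on Groq's OpenAI-compatible API; the model names and the GROQ_API_KEY environment variable are illustrative assumptions:

```python
# A minimal sketch (not from the post) of querying two LLM vendors with the
# same prompt: a local model served by Ollama and an open-weight model on
# Groq's OpenAI-compatible API. Model names and the GROQ_API_KEY environment
# variable are illustrative assumptions.
import os
import requests

PROMPT = "Summarize the trade-offs of relying on a single LLM vendor."

def ask_ollama(prompt: str, model: str = "llama3.1") -> str:
    # Ollama serves a local REST endpoint; stream=False returns one JSON object.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def ask_groq(prompt: str, model: str = "llama-3.1-8b-instant") -> str:
    # Groq hosts open weight models behind an OpenAI-compatible chat endpoint.
    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print("Ollama:", ask_ollama(PROMPT)[:300])
    print("Groq:  ", ask_groq(PROMPT)[:300])
```

Since Groq's endpoint follows the OpenAI chat completion format, swapping in another compatible vendor mostly means changing the base URL and model name, which is what keeps the switching cost low.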

Getting closer to AGI? Google's NotebookLM and Replit's AI Coding Agent

Putting "closer to AGI?" in a blog title might border on being clickbait, but I will argue that it is not! I have mostly earned my living in the field of AI since 1982 and I argue that the existence of better AI driven products and the accelerating rate of progress in research, that we are raising the bar on what we consider AGI to be. I have had my mind blown twice in the last week: Today I took the PDF for my book "Practical Artificial Intelligence Programming With Clojure ( you can read it free online here ) and used it to create a notebook in Google's  NotebookLM  and asked for a generated 8 minute podcast. This experimental app created a podcast with two people discussing my book accurately and showing wonderful knowledge of technology. If you want to listen to the audio track that Google's NotebookLM created,  here is a link to the WAV audio file Last week I signed up for a one year plan on  Replit.com  after trying the web based IDE for Haskell and Python...

New OpenAI gpt-o1-preview and gpt-o1-mini and one week experience with Replit.com AI Coding Agent

I have only spent a short while experimenting with the new gpt-o1 models: so far very impressive for science, math, and instruction following. You need a ChatGPT Plus account to try them, or you can perform rate-limited queries for half the monthly cost using Abacus AI. The thing I am most impressed with (this week!) is the Replit.com AI coding agent; after briefly trying it, I pre-paid for a one year subscription. I quickly rewrote in JavaScript a complex Clojure web app I wrote 12 years ago, making it much less expensive to host CookingSpace.com (new JavaScript version). I gave a live demo of Replit AI in my weekly AI demo and group chat. Please join: Mark Watson's informal AI presentations and group chats

Topics: Recipe: Mark’s African Stew, and converting my Clojure CookingSpace web site to JavaScript

I wanted to convert my server-side web site CookingSpace.com to mostly client-side JavaScript. I used the Replit.com AI coding agent to do this, and while this is a subject for another blog article, I showed the Replit coding AI a snippet of my JSON recipes and nutrients file and described in a few paragraphs the functionality of the new web site. I want to share a recipe that I created: Mark’s African Stew

Ingredients:

- Optional: 3/4 to 1 pound of lamb shoulder, deboned and cut into cubes, leaving some fat on the meat
- 2 tablespoons of any non-seed oil (avocado oil is good; avoid seed oils like canola, corn, soybean, sunflower, etc.)
- 1 brown onion, coarsely chopped
- 1 eggplant, partially skinned, chopped into bite-sized pieces, and heavily salted in a strainer. Before cooking, rinse well to remove excess salt!
- ground coriander (at least 1 teaspoon)
- cumin (at least 2/3 teaspoon)
- paprika (1 to 2 teaspoons, depending on how you like the taste of paprika)
- ginger...

Code and notes from my recent talk: Exploring the Future of AI: Introduction to using LLMs using Python

Topics: Large context prompts with LLMs vs. RAG using embedding vector stores. How to avoid LLM hallucination. 3 code demos. I started a Meetup group (link) for informal code demos and group conversations, and today I gave a fifteen minute code demo followed by a conversation with the attendees. Here is a GitHub repo with the code examples: https://github.com/mark-watson/talk2_LLM_Python_intro Here are my talk notes: Exploring the Future of AI: Introduction to using LLMs using Python. Riff on ‘AI grounding’ and how LLMs help: LLMs, trained on vast amounts of text, excel at recognizing patterns and providing contextually relevant responses. They mimic grounded understanding by referencing large datasets that encompass a variety of real-world scenarios. For example, they can infer meanings from complex contexts by drawing on their training data. When LLMs are integrated with other modalities, such as vision or audio (e.g., vision-language models), the grounding improves. ...
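
As a rough illustration of the first talk topic (this is a sketch, not code from the linked repo), the snippet below contrasts a large-context prompt that stuffs every document into the prompt with a RAG-style prompt built from embedding retrieval; the placeholder documents, helper names, and the sentence-transformers model are assumptions:

```python
# A sketch (not code from the linked repo) contrasting a large-context prompt
# with a RAG-style prompt built from embedding retrieval. The documents,
# helper names, and sentence-transformers model are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Large context windows let you paste entire documents into a prompt.",
    "LLMs can hallucinate facts that are not supported by their input.",
    "Retrieval Augmented Generation grounds answers in retrieved source chunks.",
]
question = "How can we reduce LLM hallucinations?"

# Strategy 1: large-context prompt -- put all of the source text in the prompt.
large_context_prompt = (
    "Answer using only the text below.\n\n"
    + "\n\n".join(documents)
    + f"\n\nQuestion: {question}"
)

# Strategy 2: RAG -- embed the chunks, keep only the most similar ones,
# and build a smaller grounded prompt from just those chunks.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(documents, normalize_embeddings=True)
q_vec = embedder.encode([question], normalize_embeddings=True)[0]
scores = doc_vecs @ q_vec                    # cosine similarity (unit vectors)
top_k = np.argsort(scores)[::-1][:2]         # indices of the 2 best chunks
rag_prompt = (
    "Answer using only the text below.\n\n"
    + "\n\n".join(documents[i] for i in top_k)
    + f"\n\nQuestion: {question}"
)

print(rag_prompt)
```

Both prompt styles tell the model to answer only from the supplied text, which is the simplest guard against hallucination; the RAG version just keeps the prompt small by retrieving the most relevant chunks first, which matters when the source material will not fit in the model's context window.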

My Dad's work with Robert Oppenheimer and Edward Teller

Robert Oppenheimer and Edward Teller facilitated my Dad getting a professorship at UC Berkeley when I was 3 years old. Oppenheimer left Berkeley but Teller was a good friend of my father and I remember him being in our home. Three weeks ago, my wife and I were just leaving to see the new Oppenheimer movie when my Dad called. He mentioned that when I was in grade school he was invited to give a talk at Princeton. After his talk Oppenheimer talked with my Dad and invited him to have dinner at his house. My Dad said Oppenheimer was not well (I think he died of throat cancer soon afterwards) but his wife Kitty carried the conversation. My Dad, Ken Watson, passed away 10 days ago on August 18, 2023.

Time and Attention Fragmentation in Our Digital Lives

As humans we have evolved over a few million years to be attentive and reactive to danger, to live in social communities, and to spend much of our time in the present moment, gathering and eating food and socializing. The habits of rapidly shifting our attention across short content on social media, and of having too many good short-form things to watch on streaming video entertainment platforms, are rewiring our brains in an unnatural and unhealthy way. I fight back, but in really simple ways that entail little ceremony: Almost every morning I spend 30 minutes scanning Hacker News (about 10 minutes), Apple News (about 5 minutes), and the remaining time on Twitter and Mastodon finding interesting new (mostly tech) things. I make notes in a temporary Apple Note: links of things I may want to research, try, or simply read that day. I like to get this all done at once, and then not feel like I need to interrupt my activities during the day to “catch up” on what is happening in the world. In a w...

ChatGPT as part of the evolution of programming languages

In the 1940s von Neumann and his colleagues created conceptual models for computer architectures that were oriented toward the engineering problems of building computing devices, and not toward making it easier for humans to write programs. The Lambda Calculus and the design of the Prolog programming language are the first real efforts that I am aware of to place emphasis on how we humans think and solve problems. I had a thought earlier today that I keep coming back to: there are concise programming languages that can be more difficult to write code in, but once the code is written it is more valuable because its conciseness yields better readability. I have been fascinated by, and use, Copilot and ChatGPT to write code, and sometimes they work well enough. What will the effects of ChatGPT and future LLMs be on the popularity of niche languages like Prolog and APL? All things considered I would often rather have a concise program in Prolog or a flavor of Lisp than a much larger ...

Elon Musk & Twitter, Tech job market, my writing projects

I find it sad that Musk's purchase of Twitter is turning out so badly. I read that advertisement pre-sales for next year are very low, thus the urgent need to cut expenses. I don't disagree with Elon Musk's original idea of having an uncensored platform, but the execution is not good. My best wishes to everyone at Twitter (and other tech companies) who has lost their job recently. The job market was crazy-good for a few years, and now I expect it to be more like after the 2000 dot-com crash, at least for a few years. I advise people to take a different approach to managing their careers. As fantastic as online (often free) classes are for teaching useful stuff like machine learning, front end development, etc., they have also greatly expanded the global talent pool. Now more than ever, I advise learning through doing your own projects. I have literally done this myself for the last 40 years: I spend my own time experimenting with tech that both fascinates me and might be u...

Not really retired 😀

I read with some humor my last blog post from 6 months ago, saying that I was retired. Ha! As I mentioned 6 months ago, my wife has chronic health problems, but those have stabilized, and life is now fairly good. I did start a very much part-time (15 hours/week) advisory gig with Mind AI about 4 months ago. Enjoyable work on an interesting product. I am not performing any substantial development work, rather spending most of my time as an architect and advisor. I am 71 years old, and leaving heavy lifting development work to younger and more energetic co-workers is for the best. I have a huge backlog of writing projects, mostly on hold for the moment because of my work at Mind AI: a new book, "Artificial Intelligence Programming in Python: Exploring the Boundaries of Deep Learning, Symbolic AI, and Knowledge Representation"; edits for my Common Lisp book, adding new examples; and ideas and some new code for my Swift AI book. For the last year I have been basically giving a...

I retired yesterday: my list of things to do in retirement

What does an intelligent person do in retirement? That is a question of individual tastes but I will share my list of 20 things. Yesterday was my last day working on a recommendation model at Babylist. Babylist is a great company to work for, but I decided to retire in order to spend more time helping my wife who now has chronic health problems. When I write books, my wife enjoys editing my work so we will keep doing that. I also plan on being a gentleman computer scientist by working on open source deep learning applications and semantic web/linked data tools and applications. I may end up not doing all of these things, but generally I plan on spending more time on current interests and starting some new hobbies:

Retirement Activities

- Join an Internet Chess club **DONE**
- Get a fishing license
- Video Games
- Improve my cooking/recipe web site
- Reading
- Release new editions for my 3 most popular eBooks
- Practice guitar, Native American Flute, and didgeridoo
- Eco-b...

My productivity hacks

Like most people, I have many more things that I would like to do than I have time to do. I believe that learning to identify action items that simply should not be done is valuable, but not easy to do. I am mildly attention deficit in the sense that I can only think about or attend to one thing at a time. For a computer scientist, this has been a super power, but it is personally inconvenient. I keep 3 TODO lists:

- TODO high priority - I tend to have 1 to 3 things on this list. I time box my activities so this list is the actions that I will rotate through. I usually work in 60 to 90 minute sprints, but for deep coding this may be 3 or 4 hours.
- TODO open - everything that I would like to do, but a lot of stuff on this list gets deleted with no further effort (the all important decisions on what not to do).
- TODO done - instead of deleting completed actions on "TODO high priority" I cut the text and paste the action text to the top of this list.

I really like the F...