{"id":13592,"date":"2025-05-28T12:22:06","date_gmt":"2025-05-28T10:22:06","guid":{"rendered":"https:\/\/www.main-vision.com\/richard\/blog\/?p=13592"},"modified":"2025-05-28T12:22:06","modified_gmt":"2025-05-28T10:22:06","slug":"playing-with-ai-agents","status":"publish","type":"post","link":"https:\/\/www.main-vision.com\/richard\/blog\/playing-with-ai-agents\/","title":{"rendered":"Playing with AI Agents"},"content":{"rendered":"<span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 2<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span><p>Yesterday evening I was at a talk about AI. The speaker had written an AI agent that scrapes news sources, reads reports, and then summarises the information into a single report.<\/p>\n<p>We often think of AI as a chat where we ask a question and get an answer. This works well when you do something once or twice. If you do something a dozen times per day, agents become interesting.<\/p>\n<p>Both Le Chat by Mistral and Gemini by Google have agents focused on specific roles. Le Chat's include a data analyst, a personal tutor, a universal summariser and a writing assistant. 
Gemini has something similar.<\/p>\n<p>The idea is that, with an AI agent, you load a set of presets into the LLM to get it to behave in a specific way and to give answers in a specific format.<\/p>\n<p>With Le Chat you can click Customise to see the instructions the AI has been given, and tweak them to suit your needs.<\/p>\n<p>Here is an <a href=\"https:\/\/chat.mistral.ai\/chat\/6df683cb-9013-4a43-84c0-3a5a6e0cd39f\">example conversation<\/a> with the AI chat agent.<\/p>\n<p>The instructions for the agent are:<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/main-vision.com\/images\/agent.png?ssl=1\" alt=\"Example Agent Instructions\" \/><\/p>\n<p>In practice a web form or a template would achieve the same result.<\/p>\n<p>If you find yourself typing the same instructions into an LLM chat again and again, agents will save you time and effort. With a few lines of instructions you can tell an LLM about its role, the things it should know about, and how you want the answer formatted.<\/p>\n<p>This saves you time and spares you from repeating yourself.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 2<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>Yesterday evening I was at a talk about AI. The speaker had written an AI agent that scrapes news sources, reads reports, and then summarises the information into a single report. We often think of AI as a chat where we ask a question and get an answer. 
This [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":12268,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"activitypub_content_warning":"","activitypub_content_visibility":"","activitypub_max_image_attachments":3,"activitypub_interaction_policy_quote":"anyone","activitypub_status":"federated","footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[255],"tags":[6887,5059,6889,6888],"class_list":["post-13592","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-swiss-walks","tag-agent","tag-ai","tag-behaviour","tag-instructions"],"jetpack_publicize_connections":[],"_links":{"self":[{"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/posts\/13592","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/comments?post=13592"}],"version-history":[{"count":1,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/posts\/13592\/revisions"}],"predecessor-version":[{"id":13593,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/posts\/13592\/revisions\/13593"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/media\/12268"}],"wp:attachment":[{"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/media?parent=13592"}],"wp:term":[{"taxonomy":
"category","embeddable":true,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/categories?post=13592"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.main-vision.com\/richard\/blog\/wp-json\/wp\/v2\/tags?post=13592"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}