LLMs, DLT, Agents, and Ethical Training Data.

DATA INTEGRITY. 

Who (or what) is feeding the exponential LLM tip of AI? 

The Race to the Top...? Or the Bottom...?

READ SAIL for informed, long-term, well-researched insights.

Here is the latest news from SAIL on agents, AI, and LLMs, with resources on agents.

I HIGHLY recommend you read and subscribe to the newsletter.

Invaluable, grounded, deep research on substantive, ethical LLMs and instructional design.

🠋🠋🠋

Welcome to Sensemaking, AI, and Learning (SAIL) by George Siemens

Likely no trend in AI will be more substantive this year than agents. After significant advancement in the capabilities of foundation models, application of those models as part of a value ecosystem (including approaches like LLMs as an operating system) is the next logical step.

I humbly recommend educators/faculty/admin/designers get comfortable with agents as both a concept and a technology.

Agents have a long history in AI, but were best explained in Russell & Norvig’s seminal text Artificial Intelligence: A Modern Approach. They defined agents as being able to “operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals”. Andrew Ng somewhat aligns with this, but places less emphasis on autonomy and on agents selecting their own goals. Most of what we see described as agents is just a tool wrapped around GPT/Gemini/Claude models and would more accurately be described as prompts.
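Russell & Norvig’s definition can be made concrete with a minimal sketch: an agent that perceives its environment, pursues a goal, persists, and adapts over time. Everything here is illustrative (a toy thermostat, not any real framework’s API), but it shows why the definition demands more than a prompt wrapper.

```python
# Minimal sketch of the Russell & Norvig agent loop: perceive, decide, act,
# persisting and adapting over time. All names are illustrative; this is a
# concept demo, not any real framework's API.

class ThermostatAgent:
    """A trivial goal-pursuing agent: keep temperature near a target."""

    def __init__(self, goal_temp: float):
        self.goal_temp = goal_temp       # the goal it pursues
        self.history: list[float] = []   # persistence over a prolonged period

    def perceive(self, environment: dict) -> float:
        return environment["temp"]

    def decide(self, temp: float) -> str:
        # Autonomy: the agent chooses its own action from its goal.
        if temp < self.goal_temp - 1:
            return "heat"
        if temp > self.goal_temp + 1:
            return "cool"
        return "idle"

    def act(self, environment: dict, action: str) -> None:
        if action == "heat":
            environment["temp"] += 2
        elif action == "cool":
            environment["temp"] -= 2

    def step(self, environment: dict) -> str:
        temp = self.perceive(environment)
        self.history.append(temp)        # adapt: remember past states
        action = self.decide(temp)
        self.act(environment, action)
        return action

env = {"temp": 14.0}
agent = ThermostatAgent(goal_temp=20.0)
actions = [agent.step(env) for _ in range(5)]
print(actions, env["temp"])  # heats until within the goal band, then idles
```

A chat prompt has none of these properties: no persistent state, no environment to perceive, no goal it pursues across steps, which is exactly the distinction the definition draws.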

Before diving into agents, here is an existential thread worth thinking about. Highlights include: “The world isn't grappling enough with the seriousness of AI and how it will upend or negate a lot of the assumptions many seemingly-robust equilibria are based upon…Then it will force changes in philosophy. What are we here for? Why do we do the things we do? If everything we care about is automatable, what is our role in the world?”

A few resources for an agent deep dive, so you’re ready for what’s coming our way in higher education.

  • Google has a whitepaper from Sept 2024 on agents. If you read one resource, make it this one. It details tools, orchestration, models vs agents, etc. The section on data stores will be particularly helpful for education (i.e. curriculum and learner data).

  • The Agent Stack. “The agent software ecosystem has developed significantly in the past few months with progress in memory, tool usage, secure execution, and deployment, so we decided it was time to share our own “agent stack” based on our own learnings from working on open source AI for over a year and AI research for 7+ years.”

  • The corporate sector is all in on agents. Salesforce has launched Agentforce. Microsoft says you’ll have a team of agents working for you by this time next year.

  • And what kind of tasks can real world agents do? Here’s a detailed list. They offer a benchmark for real world tasks and track how well agents achieve them. Interesting to note two areas of common agent failure: social skills and common sense. Success rate at tasks is somewhat low - only 24%.

  • Princeton led a workshop on agents earlier this year. The recording is here. The discussion on infrastructure for agent development is particularly important.

  • Microsoft has launched Magentic-One, a multi-agent orchestration framework. It offers a good visual of how agents (coder, websurfer, filesurfer) are called and orchestrated.

  • In higher education, how will we design for agents? A few thoughts together with a colleague, Mihnea Moldoveanu: Interactionalism: Re-Designing Higher Learning for the Large Language Agent Era.

  • When worlds collide. A MOOC on LLMs. Scroll down for an excellent series of lectures. Several of the weeks are specifically focused on agents and agentic frameworks.

  • Automated design of agentic systems. Excellent. Great overview visuals early in the doc. Practical search agent example.

  • AI Agents that Matter. Authors posit that agents require different evaluations/benchmarks from existing LLM evaluations.

  • smolagents. Hugging Face offers a library to build agents with only a few lines of code.

  • Devin was a heavily touted coding agent in early 2024. When it was released toward the end of the year, it dropped with a $500/mo price. OpenHands is an open-source alternative released in response.

  • State of AI Agents. Accessible overview of the state of agents in organizations today.
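A pattern that recurs across these resources, most visibly in Magentic-One, is a lead orchestrator routing subtasks to specialist agents. Here is a toy sketch of that pattern with every name hypothetical: the specialists are stub functions and the routing is simple keyword matching, where a real framework would use an LLM to plan, route, and check progress.

```python
# Toy sketch of the orchestrator pattern used by multi-agent frameworks:
# a lead orchestrator routes each subtask to a specialist agent. The
# specialists here are stub functions, not LLM calls, and all names are
# hypothetical, not any real framework's API.

from typing import Callable

Agent = Callable[[str], str]

def coder(task: str) -> str:
    return f"[coder] wrote code for: {task}"

def websurfer(task: str) -> str:
    return f"[websurfer] fetched pages for: {task}"

def filesurfer(task: str) -> str:
    return f"[filesurfer] read files for: {task}"

class Orchestrator:
    """Routes subtasks to specialists by keyword; real frameworks use an
    LLM for planning and routing, and loop until the task is done."""

    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def route(self, subtask: str) -> str:
        for keyword, agent in self.agents.items():
            if keyword in subtask.lower():
                return agent(subtask)
        return f"[orchestrator] no specialist for: {subtask}"

    def run(self, subtasks: list[str]) -> list[str]:
        return [self.route(t) for t in subtasks]

orchestrator = Orchestrator(
    {"code": coder, "search": websurfer, "file": filesurfer}
)
results = orchestrator.run(
    ["search the web for agent benchmarks",
     "write code to parse the results",
     "save a summary file"]
)
for line in results:
    print(line)
```

The design point the frameworks share is the separation of concerns: the orchestrator owns the plan and the task state, while each specialist owns one capability, which is also why evaluation (as in the AI Agents that Matter paper above) has to look at the whole loop rather than a single model call.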

General AI News

  • AI will lead to abundance. So says Ray Kurzweil.

  • The investment in AI from big tech and VCs remains completely insane. Microsoft will drop $80b in 2025 on data centers. There are only a few companies and countries that have the capacity to invest at this level. And they will own the future.

  • Building large language models. Short Stanford lecture. Nice overview.

  • How might LLMs store facts? This is excellent. 20-minute video.

  • The Prompt Report. A survey of prompting techniques. A good section on non-English prompting as well.

  • Random point: if you’re on Twitter, Grok is one of the best LLM implementations that I have seen in adding value to an existing platform. Meta cancelled their AI character program. There is a lesson here for higher education: Adding AI into an existing platform is fraught with user pushback.

  • AI Engineer 2025 Reading list. This is gold.

  • Altman says OpenAI knows the path to AGI. “We are now confident we know how to build AGI as we have traditionally understood it.”


The Future is Already Here, Just Unevenly Distributed. 

'We Are The Machine We are Building...' circa 2001


In summary, Charles states the obvious.

"Global regulatory alignment is crucial to prevent fragmentation and establish universal standards. Governments, enterprises, and civil society must collaborate to develop governance frameworks that prioritize public interest. DAOs, too, must evolve to provide flexible, collective oversight as AI technology advances. This is not the time for complacency. If action isn’t taken now, AI’s risks will grow unchecked, leaving us powerless to address them. The future of ethical AI depends on bold decisions today. DLT can be the foundation for this future—transparent, accountable, and aligned with humanity’s best interests." 

I recommend you click through and read the full article. 

An important summary of how the immutable transparency of BIG DATA feeding layer-1 DLT (like @HEDERA) will assist ethical agents, and therefore the QUALITY and TRUSTWORTHINESS of AI.

We have already seen a proliferation of "upgrades" in 2024, and AI is now accessible to the mainstream in a rapidly changing, highly competitive market.

Interoperability, trust, transparency, and the vexed question of "ethical AI agents", of whose "humanity" comes first in this debate, are nothing new. They are just far more critical now, as the LLM landscape rapidly diversifies and as quantum computing, AI, and DLT converge toward a new universal "tech" standard.

And foundational ethics in AI agents is nothing new; it has just become more important as AI moves rapidly into the mainstream. Who monitors what data AI is fed on? Who says so? Can the data be transparently seen? Or trusted? Or "ethical", whatever that may mean in diverse global contexts? As usual, more questions than answers, and that is the double-edged paradox. Will Microsoft, as an $80 billion investor (as George points out in SAIL), simply become the gatekeeper of "ethical agent AI data"? Can anyone else compete, invest, or "keep up ethically" if that becomes the AI investment entry price? Who (or what) may ultimately control more of the AI we become? No answers at all, just curiosity, as an obvious convergence of these fundamentally humanity-changing issues emerges.

My grandkids might even want to know one day... "back in the day..." when the machinations of the AI self-driving automobile replaced the true-natured, heart-and-soul horse-powered... maybe, maybe not?

As Charles posits (and as the researched depth of SAIL states in multiple ways), the ethical agents (LLMs) feeding AI need focused, collective attention and consideration by people far smarter than any ONE of us. AI is here to stay (and is not going away, even if you ignore it), so what learning considerations MAY we apply? Can DLT (whether traditional, now-legacy blockchains or more modern directed acyclic graphs, like hashgraph), as Charles asks, assist in how LLM agent data is chosen, washed, and fed into the AI machine we have been building for quite some time?

About George 

Connectivism Learning 

Professor George Siemens researches networks, analytics, and human and artificial cognition in education. He has delivered keynote addresses in more than 35 countries on the influence of technology and media on education, organizations, and society. His work has been profiled in provincial, national, and international newspapers (including NY Times), radio, and television. He has served as PI or Co-PI on grants with funding from NSF, SSHRC (Canada), Intel, Bill & Melinda Gates Foundation, Boeing, and the Soros Foundation. He has served as a collaborator on international grants in European Union, Singapore, Australia, Senegal, Ghana, and UK. He has received numerous awards, including honorary doctorates from Universidad de San Martín de Porres and Fraser Valley University for his pioneering work in learning, technology, and networks. He holds an honorary professorship with University of Edinburgh. Professor Siemens is a founding President of the Society for Learning Analytics Research. In 2008, he pioneered massive open online courses (sometimes referred to as MOOCs).
