Elysia chlorotica, also known as the Sap-Sucking Green Sea Slug, is a fascinating creature.
They puncture the cell wall of tidal algae with their radula (tongue), then suck out the contents like a milkshake through a straw, keeping the chloroplasts intact inside their digestive systems. For the next few months, the Sea Slug draws on the algae’s chloroplasts as needed while they continue to perform photosynthesis, converting sunlight into energy for their host, much like solar panels.
The process is called Kleptoplasty – the theft of chloroplasts. It is, in effect, nature extending one organism with another.
ℹ️
Side Note
For those curious about the underlying mechanism, a recent Harvard article, ‘Stealing a Superpower’, presents a new theory.
In the 1960s, a group of radical British architects working under the moniker Archigram created the concept of The Plug-in City. Its core could be extended through standard interfaces into which other building components and services could be plugged.
They envisioned a city built to change as its parts became obsolete. As older parts were phased out, new parts could be attached, extending the city’s core function.
Change was built into the design.
Glade PlugIns were designed to dispense aromatic volatile organic oils into the air via a patent-pending heating process. A base would be plugged into a wall socket, and a user-replaceable scented-oil cartridge could be inserted into its slot. The oils would be heated slowly, releasing the chosen aroma.
Variations included:
Night-lights
Electronic sensors
‘Rest’ mode to save power
After licensing patented technology from Color Kinetics in 2002, the company added LED color-changing effects, creating ‘a home-fragrance device that combines fragrance with a light show to provide a multi-sensory experience.’
An electrical-resistance heating element is affixed on the inner surface of the cartridge chamber shallow extension. A thin wick matrix extends internally from the cartridge chamber bottom up to the top of the chamber shallow extension.
The system allowed easy cartridge replacement: you could insert a new one containing a different oil. The cartridges did not alter the functionality of the base, yet they enabled a variety of user-selected experiences.
This is where we get a bit technical. I’ve tried my best to explain the ideas as plainly as I can, but if these sorts of things are not your cup of tea, feel free to skip ahead. I should warn you that it doesn’t get much less technical until the very last section, where there are cartoon drawings.
In previous sections, we covered AI Assistants: turning a user’s voice into text, then into Intents. These Intents were then invoked, their results incorporated into a response, and the response converted back into voice. Together, these steps provided a seamless, magical experience for the user.
Some devices work by exposing network APIs that can control them, then having assistants like Echo, Google Nest, or HomePod relay voice commands to invoke those network APIs. Others have the assistant embedded inside and integrated with the device.
What’s important is that the Assistants running on your phone or on standalone hubs act as a base. You add functionality by attaching software or hardware extensions to that base.
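To make the ‘base plus extensions’ idea concrete, here is a minimal sketch in C of the dispatch step: an intent name arrives (already transcribed from voice), gets matched against a registry, and the matching handler produces a response. The intent names and handler signatures here are hypothetical; real assistants are vastly more elaborate.

```c
#include <stdio.h>
#include <string.h>

/* An Intent handler takes the utterance's payload and returns
   a textual response (hypothetical signature). */
typedef const char *(*intent_handler)(const char *payload);

static const char *turn_on_lights(const char *payload) {
    (void)payload;
    return "Lights on.";
}

static const char *play_music(const char *payload) {
    (void)payload;
    return "Playing music.";
}

/* The base: a registry mapping Intent names to handlers.
   Extensions add functionality by adding entries. */
struct intent { const char *name; intent_handler handler; };

static const struct intent registry[] = {
    { "TurnOnLights", turn_on_lights },
    { "PlayMusic",    play_music     },
};

/* Dispatch: intent name -> invocation -> response text.
   (Speech-to-text and text-to-speech are omitted.) */
static const char *dispatch(const char *name, const char *payload) {
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0)
            return registry[i].handler(payload);
    return "Sorry, I can't do that.";
}

int main(void) {
    puts(dispatch("TurnOnLights", "living room")); /* Lights on.        */
    puts(dispatch("MakeCoffee", ""));              /* Sorry, I can't... */
    return 0;
}
```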
With these two mechanisms, a system could transfer control to a different part of the code depending on the value of a register. These were the building blocks that more advanced programming languages would use to create what became known as Subroutines, Functions, or Modules.
💡
Side Note
The Assembly language code shows an early case of transferring control to another code section and receiving the result. In this case, the invocation method and code location were tightly coupled in the same binary code.
Later, we’ll see how these mechanisms could be decoupled at runtime.
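Since the original listing is reproduced as an image, here is the gist rendered in C (my sketch, not the historical assembly): the call site names its target directly, so invocation and code location are welded together in one binary.

```c
#include <stdio.h>

/* The "subroutine": a fixed location inside this binary. */
static int add(int a, int b) { return a + b; }

int main(void) {
    /* A direct call: the compiler hard-wires add()'s address into
       the call instruction. Caller and callee are coupled in the
       same binary; changing the callee means recompiling. */
    printf("%d\n", add(2, 3)); /* prints 5 */
    return 0;
}
```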
Incidentally, both Dr. Booths worked on developing machine translation and, in 1955, demonstrated that computers could be used for language translation.
French-to-English Translation
Function Calls
In the 1972 report describing the B programming language (the predecessor of the C we still use today), Ken Thompson (who also worked on Unix and later co-created the Go language) presented the concept of a Library Call. Libraries were pre-defined functions that could access the enclosing operating system and perform tasks not defined directly in the application code.
The concept had been around for some time in early languages such as Algol and FORTRAN. But B, and later C, allowed the library code to be kept in a separate file and only linked into the program in the final stages of compilation.
B Library Call
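The B listing above is an image, so here is a rough modern analogue in C (a sketch, not Thompson’s original code): the library function lives in its own file and only joins the program in the final link step.

```c
/* mylib.c -- the "library", compiled separately: cc -c mylib.c */
int square(int x) { return x * x; }
```

```c
/* main.c -- only declares square(); the linker supplies the code
   in the final stage: cc main.c mylib.o -o app */
#include <stdio.h>

int square(int x); /* a promise: defined elsewhere */

int main(void) {
    printf("%d\n", square(7)); /* prints 49 */
    return 0;
}
```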
In 1979, Bjarne Stroustrup began designing what would become the C++ programming language, which brought in polymorphism and Virtual Function Calls. These allowed a Derived Class to Override a method and redirect the call to a different function.
This meant that, at runtime, the function actually invoked could be determined dynamically by looking it up in what was called a vtable.
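Here is a deliberately simplified sketch of the mechanism in C (an illustration of the idea, not how C++ compilers actually lay out vtables):

```c
#include <stdio.h>

/* A vtable is just a table of function pointers. */
struct vtable {
    void (*speak)(void *self);
};

/* Every object carries a pointer to its class's vtable. */
struct animal {
    const struct vtable *vtbl;
};

static void dog_speak(void *self)  { (void)self; printf("Woof\n");  }
static void duck_speak(void *self) { (void)self; printf("Quack\n"); }

static const struct vtable dog_vtbl  = { dog_speak  };
static const struct vtable duck_vtbl = { duck_speak };

/* The Caller no longer names the Callee: the actual function is
   looked up through the vtable at runtime. */
static void make_it_speak(struct animal *a) {
    a->vtbl->speak(a);
}

int main(void) {
    struct animal dog  = { &dog_vtbl  };
    struct animal duck = { &duck_vtbl };
    make_it_speak(&dog);  /* Woof  */
    make_it_speak(&duck); /* Quack */
    return 0;
}
```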
All this meant that the Caller and Callee were no longer tied together and fixed at compile time. This would have significant implications for how applications could be extended and modified as needed.
Shared Libraries
The concept of dynamic linking goes back to the Multics operating system in the mid-1960s.
In a multi-processing system, shared libraries allow the same code to be reused by multiple processes, saving memory and improving performance. Shared libraries often had to be built in a certain way so they would contain position-independent or re-entrant code. This allowed the operating system to load them into any available place in memory at runtime and adjust the code to perform the equivalent of Proceed To Instruction (remember that?).
Shared libraries also introduced the concept of runtime discovery. The operating system would look for the actual code and load the binary library file at runtime. This opened up applications to having multiple different libraries, which they could swap in and out depending on what was needed.
To conserve limited memory, some operating systems could dispose of the libraries when no longer needed and reuse the memory address space.
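On a modern Unix-like system, building such a position-independent shared library looks roughly like this (a sketch; exact flags vary by platform and toolchain):

```c
/* plugin.c -- compiled as position-independent code:
       cc -fPIC -shared plugin.c -o libplugin.so
   The resulting library can be loaded at any address, with the
   dynamic linker fixing up references at runtime. */
int plugin_version(void) { return 1; }
```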
DLL Hell
Microsoft Windows, introduced in 1985, implemented similar functionality called the Dynamic Link Library (DLL). DLLs served the same purpose as Unix shared libraries, and for the same reason (conserving RAM). Windows allowed DLL code to be discovered, loaded, executed, and discarded at runtime.
However, DLLs went further than just adding code to a running program. They also allowed features like user-interface drawing, access to hardware, and other common tasks to be built once and shared by multiple executables. The original discovery and registry process was simplistic, leading to multiple DLLs overriding each other and what became known as DLL Hell.
This also allowed malicious actors to inject their own DLLs that pretended to behave the same way as a system-shared library, enabling mass dissemination of viruses and malware via DLL Hijacking.
Unix Shared Libraries and early Windows DLLs did not enforce a calling interface between the main application and the shared code. The burden fell on developers and the accuracy of the documentation to maintain a sort of voluntary ‘handshake’. If the Caller did not follow the calling scheme, the program would crash with a SEGFAULT on Unix or a familiar crash modal on Windows.
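To see why this voluntary handshake was fragile, consider a hypothetical C sketch (the function names are mine): the caller declares one signature, the library implements another, and nothing in the toolchain catches the mismatch.

```c
/* library.c -- what the shared code actually implements */
double scale(double x) { return x * 2.0; }
```

```c
/* caller.c -- the documentation was wrong (or stale), so the
   caller declares a different signature. Nothing in the build
   checks the declaration against the library's real code. */
#include <stdio.h>

int scale(int x); /* WRONG: the library defines double scale(double) */

int main(void) {
    /* Undefined behavior: the argument and return value travel
       through the wrong registers. On a good day this prints
       garbage; on a bad day it is the crash modal, or worse. */
    printf("%d\n", scale(21));
    return 0;
}
```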
If the problem was particularly dire, you might even see the notorious Blue Screen of Death (BSOD).
It happened often enough that it even entered popular culture…
Microsoft went on to create a more rigorous way to define the handshake between an Application and its DLLs via a TLB (Type Library) description, which would be compiled and built into the binary shared library. This reduced the number of crashes, but the virus problem remained.
Much more work had to be done to reduce the incidence of malicious software, like validating that a verified developer had created a specific code library; the operating system would then enforce the authenticity of the binary code. Microsoft wrapped these mechanisms under the brand Authenticode:
This started decades of cat-and-mouse games between malware writers and Microsoft, walking a tightrope between user convenience and security.
Apple Mac systems ended up with a similar Code Signing feature, enforced by the Gatekeeper subsystem. The first few years after its introduction were painful, since users were ingrained in the habit of downloading installers from third-party sites. Apple eventually shunted everyone to the Mac App Store, where application binaries could be checked for malware and cryptographically signed to prevent tampering.
On the Linux side, consumer-oriented desktop Linux distributions are trying to move users to their own convenient Flathub and Snap Store. On the server side, however, attacks on shared libraries persist to this day, with serious consequences.
Code libraries grouped together are called modules, a mechanism supported by many programming languages. Each language ecosystem has its own registry, where anyone can upload their code and provide descriptions to make it easier to find.
These libraries can be loaded at different points in the development lifecycle. Most often, they are added or installed during development, which downloads them to the developer’s machine; they are then compiled, packaged, or linked into the finished distribution for debugging or deployment.
However, this can create dependencies on unknown third parties:
To encourage open-source sharing, these registries perform minimal authentication beyond validating a simple checksum, which ensures the version uploaded to the repository hasn’t been tampered with on its way to the developer’s machine.
However, that does not account for malicious users hijacking a repository and overwriting it with their own versions. This has led to a proliferation of what are called Supply Chain Attacks.
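For the curious, here is roughly what that checksum validation amounts to, sketched in C with OpenSSL’s SHA-256 (the package bytes and expected digest are placeholders). Note what it does and does not prove: integrity of the bytes in transit, not the identity of whoever uploaded them.

```c
/* checksum.c -- verifying a downloaded package against the digest
   published by the registry (build: cc checksum.c -lcrypto) */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void) {
    /* Placeholder package bytes; in reality, the downloaded file. */
    const unsigned char package[] = "pretend these are package bytes";
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(package, sizeof package - 1, digest);

    /* Placeholder: the expected digest ships in the registry's
       metadata alongside the package. */
    const unsigned char expected[SHA256_DIGEST_LENGTH] = { 0 };

    /* This catches corruption in transit -- but an attacker who
       overwrites the package in the registry simply publishes a
       matching digest. Integrity, not identity. */
    if (memcmp(digest, expected, sizeof digest) == 0)
        puts("checksum OK, installing");
    else
        puts("checksum mismatch, refusing to install");
    return 0;
}
```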
There’s also the problem of over-reliance on extension modules:
The modern software world is built on a foundation of trust in shared modules and functionality.
ℹ️
Side Note
The new world of AI has to rely on the same wobbly foundations. However, it takes the risk up a few notches by adopting specifications that lack essential security features, like the Model Context Protocol (MCP).
More on this soon.
Attestation
One of the first steps in stopping the growth of malicious software is reliably proving where it came from.
This was a serious problem that could severely impact device manufacturers like logic-analyzer maker Saleae or Onewheel (full disclosure: I’m a happy user of the former, and worked on the latter).
Attestation requires building a web of trust on both the provider and consumer sides of services. AI Assistants that want to allow access to third-party tools and services must offer this to limit the blast radius of malicious tool providers.
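As a purely hypothetical sketch (the structures and function names are mine; no real attestation scheme is this simple), the consumer-side check boils down to verifying a tool’s provenance against a root of trust before invoking it:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical structures -- no real attestation spec is this simple. */
struct manifest {
    const char *tool_name;
    const char *publisher; /* who claims to have built the tool   */
    const char *signature; /* publisher's signature over the code */
};

/* Stand-in for real cryptographic verification against a
   certificate chain; here we only check the publisher against
   a root the consumer already trusts. */
static bool provenance_ok(const struct manifest *m, const char *trusted) {
    return m->signature != NULL && strcmp(m->publisher, trusted) == 0;
}

int main(void) {
    struct manifest tool = { "weather-tool", "acme-verified", "sig-bytes" };

    /* The assistant only invokes tools whose provenance checks out,
       limiting the blast radius of a malicious provider. */
    if (provenance_ok(&tool, "acme-verified"))
        puts("tool attested: OK to invoke");
    else
        puts("unattested tool: refusing");
    return 0;
}
```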
This problem plagues organizations as buttoned-down and strict as the US DoD.
❗
Yikes!
As of this writing (June 2025), none of the AI extension specifications support Attestation.
Congratulations. You’ve built an iPhone Mobile App!
It’s taken months of toiling on design and development. Now, you have to make sure it complies with the Apple App Review Guidelines. These are the rules by which Apple evaluates an app before allowing it to be published in the App Store.
You’re almost past the final hurdle before the public launch.
But then, you see…
2.5.2 - Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app, including other apps.
No plug-ins. No downloadable extensions.
Hope you read that clause before designing your app!
In case you’re wondering, Google’s Play Store also has a submission process. But in Android-land, things are less buttoned-up. In fact, there is a process for delivering on-demand extensions:
Apple blocks dynamic extensions, reasoning that extensions not vetted by Apple can load malware onto a phone. They are absolutely correct on this account. But a side effect of this heavy-handed approach is that developers cannot customize an app at the deep, binary level for individual users.
Again, this is not limited to Apple and iOS. Google, Microsoft, and any company looking to limit access to unvetted third-party software have had to endure the same process.
It boils down to Freedom vs. Safety and Control vs. Openness.
Which side of the issue you land on will undoubtedly color how you perceive what lies ahead in the AI Assistant world.
❗
Soapbox
iOS not allowing dynamically-extended apps is a significant design flaw.
It will hinder the architecture of an open-ended Siri and limit the key benefits of App Intents.
What many people don’t know is that under the hood, iOS is based on BSD Unix, and BSD has long supported shared libraries.
This includes the dlopen() (Dynamic Library Open) function, which allows shared libraries to be loaded at runtime. The technology has been there all along, but App Store guidelines have stopped developers from taking advantage of it, time and again.
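For the curious, here is a minimal sketch of dlopen() in action (assuming the hypothetical libplugin.so from the shared-library sketch earlier; build with cc loader.c -ldl -o loader):

```c
/* loader.c -- discover, load, call into, and dispose of a shared
   library at runtime */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* Discover and load the code at runtime... */
    void *handle = dlopen("./libplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* ...look up a symbol by name... */
    int (*plugin_version)(void) =
        (int (*)(void))dlsym(handle, "plugin_version");
    if (plugin_version)
        printf("plugin version: %d\n", plugin_version());

    /* ...and dispose of it when no longer needed. */
    dlclose(handle);
    return 0;
}
```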