Thistle Gulch Simulation - v0.5

Thistle Gulch Simulation is an open-source Multi-Agent Gym Environment (MAGE). Designed for creators, researchers, and AI enthusiasts alike, our simulation platform offers unprecedented control over the narrative and character interactions through a powerful Python API. This allows for deep customization of AI decision-making processes and conversation generation. Whether you're looking to simulate social interactions or explore the boundaries of AI-driven storytelling, our goal is to offer the tools and flexibility to bring your visions, and these AI characters, to life.

Here are the latest release notes for v0.4 and v0.5. Check them out and join the Discord community!

Thistle Gulch Simulation v0.5 includes the following dependencies: the Bridge (v0.5) and the Runtime (v1.48).

A note on versioning: Moving forward, the Bridge version (0.5) is the main version to track when we say “Thistle Gulch Version”, instead of the Runtime version (1.48), to keep things simpler. We’ll also call the entire suite “Thistle Gulch Simulation” (or “Sim” for short).

New Demos

Our current crop of demos is very brief and exists to show how to use the API, but we’re working on more comprehensive tutorials that still use the API under the hood. Since the API is becoming more powerful, a better experience is possible. For example, try the new Tutorial demo or watch the video below to see what it looks like.

CTO Frank Carey walks through the new Tutorial Demo.

General improvements in the demos:

  • LLM output is streamed to the console by default now,
    token-by-token, for easier debugging and visibility.

  • The default demo has been updated to a brief tutorial (see above).

  • Description of each demo is shown after you select it
    with an option to back out and select another.

  • CLI demo options are now validated.

  • Press ENTER to see options when prompted in demos.
    For example, listing all the characters and their backstories.

Claude 3 Demo

CTO Frank Carey Explores Claude 3 with Thistle Gulch

With new models coming out all the time, another demo was added to the “Custom Models” section that allows testing Claude 3, provided you have an Anthropic API key. The video above takes a look at the three versions of Claude 3 (Haiku, Sonnet, and Opus) in the context of Thistle Gulch.

More API Demos

Each of the new APIs below has a demo available as well to show how it works.

New Python APIs

With each release, more Runtime APIs open up via the Python Bridge. You can already pause the simulation and change character details and actions, but this release adds a lot more control. To see all of the APIs and their documentation, check the GitHub repo.

New Event Handlers

In addition to the on_ready() [start of sim] and on_tick() [each tick of the sim] events, three new powerful events allow you to customize and control the simulation.

on_action_complete() is triggered when an NPC has concluded their current action, and it allows your code to react by specifying a new action of your choosing, or simply to record that information for other purposes, like keeping a history of NPC actions. By specifying a follow-up action, you can have the characters follow a scripted series of actions, one after the other. Or you can reuse the SAGA ActionAgent to run your own LLM action generation, with your own prompts, on any character at any time; it’s very powerful! We use it extensively in the new “default demo” tutorial if you want to see it in action.

on_event() is similar, but more generic. In fact, “action-complete” is just a specific event type. on_event() allows you to capture any “new style” events in one handler. It’s currently used to get modal option information returned, but expect to see it in even more places very soon.

on_error() is a new type of event that allows your Python code to detect and handle errors that happen on the Runtime. A common error might be an invalid action option generated by SAGA. Currently, these are just logged on the Bridge, which is also helpful for debugging.
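
As a rough sketch, wiring up the three new handlers might look like the following. The exact RuntimeBridge signatures and registration style are documented in the repo; the argument names here are illustrative assumptions.

    action_history = {}  # persona_id -> list of completed actions

    # Illustrative handler signatures - check the repo docstrings for the real ones.
    async def on_action_complete(bridge, persona_id, completed_action):
        # Record the finished action, or return a follow-up action of your choosing.
        action_history.setdefault(persona_id, []).append(completed_action)

    async def on_event(bridge, name, data):
        # "action-complete" is just one event type; all "new style" events land here.
        print(f"event: {name} -> {data}")

    async def on_error(bridge, error):
        # e.g. an invalid action option generated by SAGA.
        print(f"runtime error: {error}")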

Modals and Option Selection

Another powerful tool for making interactive content is giving the user a notification and options to select from.

This is already used heavily with the Action Options generated via SAGA, which are presented to the user to select what they want the character to do next. Now, the Bridge has full access to the modal API as well. Simply call the API to create a new modal with a title, description, and a list of button titles to present to the user, as seen in the screenshot with its single “Start Tutorial” option.

Links are supported via the <link=URL></link> markup, as well as <b> and <u> tags, and there is a parameter to control whether or not the modal pauses the simulation.

Once the user responds, the selection is returned to the on_event() callback, but there’s actually an even easier method using asyncio.Future objects; see the next section.
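
As a hedged sketch of the callback flow (the modal() method, its parameters, and the event name below are illustrative assumptions, not the exact Bridge API):

    # Illustrative names - see the modal demo for the real calls.
    async def show_welcome_modal(bridge):
        await bridge.runtime.api.modal(
            title="Welcome to Thistle Gulch",
            description="Ready to begin? <b>Good luck!</b>",  # <b>, <u>, <link=URL></link> work here
            buttons=["Start Tutorial"],
            pause=True,  # whether the modal pauses the simulation
        )

    async def on_event(bridge, name, data):
        if name == "modal-response":  # illustrative event type
            print("user selected:", data.get("choice"))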

asyncio.Futures

When you trigger something and want to wait for it to complete, callbacks can be a pain to use. Instead, two APIs now support passing in an asyncio.Future object. For modals, you can await that future and then use the result to get the choice the user made before your code proceeds. Similarly, override_character_action() takes a future that will be awaited until the action completes, making it easy to string together a series of three actions in three lines of code. See the new Tutorial demo code for examples of how to use it.
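
Here’s a minimal sketch of the Future-based flow, under the assumption that both APIs accept a future keyword as described above (the method names and parameters are illustrative; the Tutorial demo has the real calls):

    import asyncio

    async def scripted_beat(bridge, persona_id, actions):
        # Await the user's modal choice directly instead of wiring up on_event().
        choice = asyncio.get_running_loop().create_future()
        await bridge.runtime.api.modal(title="Ready?", buttons=["Go"], future=choice)
        await choice  # resolves with the user's selection

        # Chain actions back-to-back; each future resolves when its action completes.
        for action in actions:
            done = asyncio.get_running_loop().create_future()
            await bridge.runtime.api.override_character_action(persona_id, action, future=done)
            await done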

Memories API

NPCs have memories that are stored in their history panel, but they’re also used for generating actions and conversations. Until now, these memories were locked, but you can now create and delete memories via the API, and even clear an NPC’s memory and start over completely. It’s finally possible to create your own scenarios like the current murder mystery, or to refine that scenario with additional detailed memories on top of changing character backstories.
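
A sketch of what that could look like; the method names below are illustrative assumptions (the real ones are documented in the repo):

    # Wipe an NPC's memories and seed a custom scenario (illustrative names).
    async def reseed_memories(bridge, persona_id):
        await bridge.runtime.api.clear_memories(persona_id)
        await bridge.runtime.api.add_memory(
            persona_id,
            "I saw a stranger lurking near the bank just before midnight.",
        )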

Custom Conversations

Instead of using the default SAGA “generate-conversation” system to create what characters say to each other, custom conversations can now be specified when you use the “converse_with” skill. This is useful when you want to ensure a specific dialogue happens, or to save the time of generating one. It only affects that single action, so it’s easy to have a baked conversation between characters and then let the LLM do the rest.
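
Roughly, a baked conversation might be handed to the skill along these lines (the persona IDs, parameter names, and payload shape are illustrative assumptions):

    # Force one scripted converse_with action; later actions fall back to the LLM.
    scripted = [
        {"persona_id": "sheriff", "dialogue": "You were at the mine last night."},
        {"persona_id": "suspect", "dialogue": "Prove it."},
    ]

    async def force_dialogue(bridge):
        await bridge.runtime.api.override_character_action(
            "sheriff",
            action={
                "skill": "converse_with",
                "parameters": {"persona_guid": "suspect", "conversation": scripted},
            },
        )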

Control the Camera within the Runtime via Python Code

Camera Controls

Previously, there was no way to control the Runtime’s camera. That made it easy to miss actions or events triggered from code, since the user didn’t know who they should be following or focused on at any given time.

Now, there is a simple API for follow_character() [locking the camera to a character] and focus_character() [showing the character detail panel] via Python code. This helps make the demos better, but it’s also another powerful tool in the toolbox of any creator to focus a user (including themselves) on the action.

For the real power users looking to make something more cinematic, there is a new place_camera() API that allows you to set a cinematic camera at a specific world coordinate, along with the camera’s field of view and rotation. There is a demo for this you can try out, but we’re still working on a tool to make it easier to grab these details from within the cinematic camera mode of the Runtime.
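
Putting the three calls together might look like this (the signatures are illustrative; the camera demo has the real ones):

    async def frame_the_action(bridge, persona_id):
        await bridge.runtime.api.follow_character(persona_id)  # lock the camera to the NPC
        await bridge.runtime.api.focus_character(persona_id)   # open their detail panel

        # Cinematic shot: world position, rotation (Euler degrees), and field of view.
        await bridge.runtime.api.place_camera(
            position=(120.0, 3.0, -42.0),
            rotation=(10.0, 90.0, 0.0),
            field_of_view=45.0,
        )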

World and Persona Contexts

Previously, the world context (things like locations, the list of all the characters, etc.) was only available by requesting a character’s context. That’s now split into two API calls: the Persona Context still includes the World Context for convenience, but you can also fetch just the World Context by itself. The Persona Context also includes more information now, like the NPC’s destination and current location, in addition to their character data, observations, energy levels, and the like. Location objects now have more details, like bounding boxes and Vector3 positions, in addition to their IDs and descriptions.
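
As a sketch of the split (the call and field names are illustrative assumptions):

    async def inspect_contexts(bridge, persona_id):
        world = await bridge.runtime.api.get_world_context()
        for loc in world.locations:  # locations now carry bounding boxes and Vector3 positions
            print(loc.id, loc.description)

        # Persona context still embeds the world context, plus destination,
        # current location, observations, energy levels, and the like.
        persona = await bridge.runtime.api.get_character_context(persona_id)
        print(persona.destination, persona.location)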

Other Improvements

  • Speech bubbles are now rendered in “overlay” mode so geometry doesn’t block them. This only applies to characters who are “focus” enabled or are being “followed” (i.e. the camera is locked to them); geometry will still block the speech bubbles of other characters.

  • Documentation is greatly improved. See the docstrings in the repo; the demos now have docstrings as well. If you haven’t seen it yet, also check out the WIKI.

  • OpenAI is no longer required. It’s still the default and the one we recommend starting with, but you can try any LLM you like. See the Claude 3/Anthropic demo or the Ollama demo for examples.

Frank Carey

Fable CTO
