Thistle Gulch Simulation - v0.7+v0.8

Here are the latest release notes for v0.8. Check them out and join the Discord community!

Thistle Gulch Simulation v0.8 includes the following updates:

First “Story” Demo - A Christmas Carol

The #story room in Discord came up with some ideas for stories to tell in Thistle Gulch. For the first one, we started with a simple premise: it’s Christmas Eve, and Sarah Brown and Rev Blackwood want to recruit some townsfolk to sing a carol that evening. They discuss which character they should persuade and which of them should do the recruiting. The conversation generates a plan via the LLM, which is then executed. They repeat this plan-and-execute loop until 6pm, when the carol finally begins. All successfully recruited characters meet at the church grounds and sing.

This is just the beginning of more advanced, community-driven stories that really show off what can be done with Thistle Gulch and SAGA. Stay tuned for a full writeup and video on how this story was accomplished, as well as new stories the community is creating.

Run Demos directly via Itch App UI

(Windows only) run_demos.py is now built into an executable called thistle-gulch.exe and included in the Windows Runtime builds that are sent to Itch. That significantly reduces the setup steps for people just looking to get started with the demos. This actually came out last release, but it’s worth reminding folks.

Also new for everyone using the Itch App to install the Runtime: there are buttons for Discord and the README quick-start as well, which will open the link in your browser.

Customize Characters in personas_config.yaml

Want to customize the characters? For instance, changing the saloon owner’s backstory to make him scared of spiders and giving him a memory of a tragic run-in with a nest of spiders? Now you can do that without writing code using the new personas_config.yaml. The default config file already includes the details of each character in Thistle Gulch, making it easy to see each character’s details and experiment with changing a character’s backstory, memories, and more.

Character Memories can be added, changed, or removed

Also in personas_config.yaml, all character memories can be overridden, removed, or added to. The format provides the summary, a timestamp, and optionally a location and a list of entity ids (sim objects). The importance weight is used to filter memories, with 1 being the default. Use 10 to make sure the character always recalls that memory when generating conversations and actions.
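As a rough sketch of what a memory entry could look like (the exact key names are assumptions based on the description above; check the shipped personas_config.yaml for the real schema):

```yaml
# Hypothetical memory entry - field names are assumptions; see the
# default personas_config.yaml for the actual schema.
memories:
  - summary: "Found a nest of spiders in the cellar and barely escaped."
    timestamp: "1885-06-12T21:00:00"
    location: "saloon_cellar"      # optional
    entity_ids: ["saloon_owner"]   # optional sim-object references
    importance: 10                 # 1 is the default; 10 = always recalled
```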


personas_config.yaml also supports enabling or disabling automatic action and conversation generation by default. Code in demos and stories, or the UI toggle, can override these defaults via the API, but out of the box no characters have conversations or actions generated for them without changing this file, putting users more in control of costs.
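A minimal sketch of those per-character toggles (key names and the character id are assumptions; consult the default personas_config.yaml for the actual structure):

```yaml
# Hypothetical - key names and persona id are assumptions.
personas:
  sarah_brown:
    enable_actions: false        # no automatic action generation
    enable_conversations: false  # no automatic conversation generation
```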

GPT-4o is now the default for Conversations

OpenAI dependencies have been upgraded to allow for GPT-4o, which is twice the speed and half the cost of GPT-4-turbo while still achieving good results. We’ve made it the default for conversations in this release, reducing the default costs. A bug that was causing the model to be overridden unexpectedly was also fixed.

New bridge_config.yaml

The config file starts off with some basics you would normally have to provide as command-line args.

- host: "localhost"
  port: 8080
  cors: "*"
  runtime_path: ""

LLM configuration

Fable SAGA no longer has the concept of “model_override” or a package-default model or temperature. All configuration is either handled within bridge_config.yaml or needs to be set explicitly in code.

For instance, bridge_config.yaml enables experimenting with Ollama, Claude, and more in all the demos instead of the OpenAI default. By setting the import and class, any supported LangChain model that inherits from BaseLLM should now be supported without changing code. In addition to the model, params like temperature, streaming, and JSON formatting are there as well.
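A hedged sketch of what such a model entry might look like (the section and key names are assumptions; the shipped bridge_config.yaml defines the real schema):

```yaml
# Hypothetical sketch - key names are assumptions; check the shipped
# bridge_config.yaml for the actual schema.
conversation_llm:
  import: "langchain_community.llms"  # module to import the class from
  class: "Ollama"                     # any LangChain class inheriting BaseLLM
  params:
    model: "llama3"
    temperature: 0.7
    streaming: true
```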

LLM Debugging

It’s also now easier to debug LLMs via bridge_config.yaml using the following debug options for each model (conversation or action LLM):

  • debug_prompt: Prints the prompt being used by that model before generating.

  • debug_response: Prints the entire response object after generating.

  • debug_info: Prints the model info only (if it exists - currently only OpenAI models have this), which includes token usage, costs, etc.
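Put together, the debug flags might look something like this (the nesting and section name are assumptions; see the shipped bridge_config.yaml for the actual layout):

```yaml
# Hypothetical - nesting and section name are assumptions.
conversation_llm:
  debug_prompt: true     # print the prompt before generating
  debug_response: false  # print the entire response object
  debug_info: true       # print model info (token usage, costs) - OpenAI only
```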

Cost Counter Improvements and API

Costs are now calculated within fable-saga for Action and Conversation generation, and that information is passed along to the Runtime to be displayed in the cost counter. That allows the new GPT-4o (omni) to have its costs calculated, for instance. Only OpenAI cost calculation is supported right now, but other vendors should be easy to add with a custom callback.

You may want to use an LLM outside of the SAGA requests and have those costs reflected in the UI. There is a new API for doing that: api.add_cost().
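As a rough illustration, you could estimate the cost of your own LLM call from its token usage and then report it via api.add_cost(). The pricing constants below are assumptions (GPT-4o launch rates of $5 per 1M input tokens and $15 per 1M output tokens); check current pricing before relying on them:

```python
# Estimate the USD cost of a custom LLM call, to report via api.add_cost().
# Pricing is an assumption (GPT-4o launch rates); verify current pricing.
INPUT_COST_PER_1M = 5.00    # USD per 1M prompt tokens
OUTPUT_COST_PER_1M = 15.00  # USD per 1M completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (prompt_tokens * INPUT_COST_PER_1M
            + completion_tokens * OUTPUT_COST_PER_1M) / 1_000_000

cost = estimate_cost(prompt_tokens=1200, completion_tokens=300)
print(round(cost, 6))  # 0.0105
# You would then pass this along, e.g. api.add_cost(cost).
```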

Conversation Prompt Fixes

Conversations are now formatted as a list of ConversationTurn objects: {&lt;persona_guid&gt;, &lt;dialogue&gt;}. Even with the previous schema, OpenAI generated in this format about 10% of the time, causing parsing to fail. The new format also saves tokens in the prompts and when generating conversation turns. The default prompt from the Runtime was also malformed in some places, such as the conversations, memories, and character metadata sections, leading to more prompt tokens than necessary, so that’s been improved as well.
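For illustration, a generated conversation in the new format would look something like this (the dialogue and persona_guid values are made up; the field names come from the ConversationTurn description above):

```json
[
  {"persona_guid": "sarah_brown", "dialogue": "We should ask the blacksmith first."},
  {"persona_guid": "rev_blackwood", "dialogue": "Agreed. I will do the asking."}
]
```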

Conversation parsing can still run into errors (generative AI isn’t perfect yet), but Runtime parse errors are now returned to the bridge to be handled or logged.

API Improvements

Focus API Adds Collapsed ‘open_tab’ Option

When using api.focus_character() to show a character’s detail panel, there is a new COLLAPSED option that shows the small version of the panel. It can be expanded by clicking on the button, but this makes it easier to see dialogue.

Update multiple character properties at once via API

api.update_character_properties() now takes a list of properties and values, so it only takes one request to update multiple properties at once. For example, energy, backstory, and summary can all be updated in a single call.

WorldContext object passed to on_ready()

WorldContext contains all of the character details, conversations, memories, locations, sim objects, and more. It’s super handy, so it’s automatically provided by the Runtime when on_ready() is called. It’s useful, for instance, for disabling all characters in a story or moving characters to a specific location before starting, as demonstrated in the “Christmas Carol” story.

Memory positions can be None

Memories no longer need a specific Vector3 for their location. If it’s None, then it doesn’t have a specific location.

Frank Carey

Fable CTO
