Runtime prompt/manual lookup to save context length #6

@david-andrew

Description

Problem

There needs to be some way for the LLM to look up documentation for a particular tool at runtime rather than including all the information in the initial prompt. When there are lots of tools, or the tools have complex behavior with long prompts, this takes up lots of the limited input context available in GPT-4 (currently 8k tokens).

The current approach of stuffing everything into the initial prompt has the following drawbacks:

  • expensive (OpenAI charges per token)
  • can confuse the LLM. If there are too many tools with long descriptions, the LLM may be less likely to home in on the section it needs to complete its current task
  • decreases the number of interactions the user can have with the agent. GPT-4/etc. have a finite input length, so the longer the initial prompt, the less space for the user's actual conversation with the LLM

Approaches / Open Questions

  • have a manual tool that the LLM can call with the name of a tool it wants more info on
    • where does the more info live?
    • how does the prompt generation know to add a note that more info can be looked up? (don't want to require the user to manually say that more info can be looked up)
  • what about the PythonTool or similar? We want to be able to pass in local variables whose values the model could presumably look up. This is semi-handled by the pyman tool: https://github.com/jataware/archytas/blob/master/archytas/tools.py#L134, though the prompt generated for the PythonTool doesn't update to indicate that the locals exist, or that information about them can be looked up
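One way the first bullet could work, sketched below. This is a hypothetical illustration, not archytas's actual API: the `TOOL_DOCS` registry, `manual`, and `short_prompt` names are all made up for this sketch. Full docstrings live in a registry; the initial prompt only lists tool names plus a one-line summary, and the LLM calls the manual tool when it needs more.

```python
# Hypothetical sketch of a runtime "manual" tool (names are invented,
# not archytas's real API). Full docs stay out of the initial prompt.

TOOL_DOCS = {
    "calculator": "calculator(expression: str) -> str\n\nEvaluates a math expression...",
    "datetime": "datetime() -> str\n\nReturns the current UTC timestamp...",
}

def manual(tool_name: str) -> str:
    """Return the full documentation for the named tool, on demand."""
    doc = TOOL_DOCS.get(tool_name)
    if doc is None:
        known = ", ".join(sorted(TOOL_DOCS))
        return f"No manual entry for {tool_name!r}. Known tools: {known}"
    return doc

def short_prompt() -> str:
    """Build the compact initial prompt: tool names + first doc line only."""
    lines = ["Tools (call manual(name) for full documentation):"]
    for name, doc in TOOL_DOCS.items():
        lines.append(f"- {name}: {doc.splitlines()[0]}")
    return "\n".join(lines)
```

This also suggests an answer to the second open question: if prompt generation always emits the `short_prompt`-style listing, the "more info can be looked up" note is baked in and the user never has to add it manually.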
