Integrate Superagent with Astra DB Serverless

Estimated time: 20 minutes

This Superagent integration with Astra DB Serverless allows you to build, manage, and deploy unique AI Assistants. By following the steps in this topic, you can use Astra DB Serverless as your backend vector store for Superagent data.

Prerequisites

To complete this tutorial, you’ll need the following:

  • An active Astra DB Serverless database
  • An OpenAI API key
  • A Replit account for deploying Superagent
  • Python 3.10 with Conda

Connect to the database

  1. In Astra Portal, under Databases, navigate to your database.

  2. Ensure the database is in Active status, and then select Generate Token. In the Application Token dialog, click Copy to copy the token (e.g. AstraCS:WSnyFUhRxsrg…​). Store the token in a secure location before closing the dialog.

    Your token is automatically assigned the Database Administrator role.

  3. Copy your database’s API endpoint, located under Database Details > API Endpoint (e.g. https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com).

Set your credentials as environment variables.

Linux or macOS:

export ASTRA_DB_API_ENDPOINT=*API_ENDPOINT* # Your database API endpoint
export ASTRA_DB_APPLICATION_TOKEN=*TOKEN* # Your database application token
export OPENAI_API_KEY=*API_KEY* # Your OpenAI API key

Windows:

set ASTRA_DB_API_ENDPOINT=*API_ENDPOINT*
set ASTRA_DB_APPLICATION_TOKEN=*TOKEN*
set OPENAI_API_KEY=*API_KEY*
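Before running the examples, it can help to confirm the variables are actually set. Here is a minimal sketch of such a check; the variable names match those exported above:

```python
import os

REQUIRED_VARS = (
    "ASTRA_DB_API_ENDPOINT",
    "ASTRA_DB_APPLICATION_TOKEN",
    "OPENAI_API_KEY",
)

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```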

Set up your environment

Create a Python environment and install the needed requirements.

conda create --name superagent-demo python=3.10
conda activate superagent-demo
brew install poetry # macOS; on other platforms, install Poetry per its documentation

Set up Superagent

You can deploy Superagent on Replit, an online platform that provides an IDE for coding and collaborating on projects in various programming languages. It’s designed to be accessible and user-friendly for both beginners and experienced developers. Replit offers different usage plans: the free plan includes basic features for coding, collaborating, and running code in the IDE, while the paid plan lets you deploy your applications and make them publicly accessible.

Follow the steps for deploying Superagent on Replit in the Superagent documentation. Then return to this topic’s instructions.

Create a .env file with the following values:

JWT_SECRET="superagent"
VECTORSTORE="astra"
ASTRA_DB_ENDPOINT="API_ENDPOINT" # Your database API endpoint
ASTRA_DB_APPLICATION_TOKEN="TOKEN" # Your database application token
ASTRA_DB_COLLECTION_NAME="YOUR_ASTRA_DB_COLLECTION" # Your database collection name
ASTRA_DB_KEYSPACE_NAME="YOUR_ASTRA_DB_KEYSPACE" # Your database keyspace name
OPENAI_API_KEY="API_KEY" # Your OpenAI API key
SUPERAGENT_API_KEY="API_KEY" # Your Superagent API key
MEMORY_API_URL=https://memory.superagent.sh
TZ="Etc/UTC"
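Superagent typically loads this file with a dotenv library. For illustration only, a minimal parser for this simple KEY=VALUE format might look like the following sketch (this is not the loader Superagent actually uses):

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines (optionally double-quoted) into a dict.

    Skips blank lines and comment lines, and drops trailing inline comments.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        value = value.split(" #", 1)[0].strip()  # drop trailing inline comment
        env[key.strip()] = value.strip('"')      # unquote if double-quoted
    return env
```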

Superagent API key

If you do not already have a Superagent API key, create one using the api-users endpoint of the Superagent API while logged in at superagent.sh. For detailed instructions on managing API keys, including creating and using them securely, see the Superagent documentation.

Use Astra DB Serverless with Superagent

This tutorial uses a PDF file (a background note on tourism trends) for demonstration purposes. In the Python code example, you can specify a different source document.
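The steps below assume an initialized Superagent Python client named client. Roughly, the setup might look like the following sketch; the import path, constructor parameters, and default base URL are assumptions, so check the Superagent documentation for your deployment:

```python
import os

def superagent_settings():
    """Collect Superagent connection settings from the environment."""
    return {
        # Point base_url at your own Replit deployment if you are self-hosting.
        "base_url": os.environ.get("SUPERAGENT_API_URL", "https://api.superagent.sh"),
        "token": os.environ.get("SUPERAGENT_API_KEY", ""),
    }

# With the superagent-py SDK installed, the client used in the steps below
# would be created along these lines (names are assumptions):
#   from superagent.client import Superagent
#   client = Superagent(**superagent_settings())
```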

  1. Configure a Large Language Model (LLM) using OpenAI.

    llm = client.llm.create(request={
     "provider": "OPENAI",
     "apiKey": "<YOUR_OPENAI_API_KEY>"
    })
  2. Create an agent, also known as an assistant. Because the LLM doesn’t know when to trigger the datasource, the code prompts the user for a question that’s used to query the datasource.

    agent = client.agent.create(request={
        "name": "Chat Assistant",
        "description": "My first Assistant",
        "avatar": "https://myavatar.com/homanp.png",
        "isActive": True,
        "initialMessage": "Hi there! How can I help you?",
        "llmModel": "GPT_3_5_TURBO_16K_0613",
        "prompt": "You are a helpful AI Assistant, use the Tourism trend to answer any questions.",
    })
  3. Attach the LLM to the agent.

    client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)
  4. Create a datasource. In this step, the code reads an identified PDF (tourism trends), processes and encodes the string content, and inserts the data into the Astra DB Serverless database that’s identified in the .env file.

    datasource = client.datasource.create(request={
        "name": "tourism trend",
        "description": "demo pdf doc from internet",
        "type": "PDF",
        "url": "https://cor.europa.eu/en/events/Documents/NAT/Tourism%20-%20new%20trends,%20challenges%20and%20solutions%20-%20background%20note.pdf"
    })
  5. Add the datasource to the agent:

    # Connect the datasource to the Agent
    client.agent.add_datasource(
        agent_id=agent.data.id,
        datasource_id=datasource.data.id
    )
  6. Invoke the agent. The example asks a specific question that is relevant to the PDF file’s tourism content.

    prediction = client.agent.invoke(
        agent_id=agent.data.id,
        input="summarize the tourism trends",
        enable_streaming=False,
        session_id="my_session" # Best practice is to create a unique session per user
    )
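As the comment in step 6 notes, it is best practice to give each user a unique session_id so conversation memory isn’t shared between users. One simple way to derive stable per-user session IDs is a namespaced UUID (a sketch; this helper is not part of the Superagent SDK):

```python
import uuid

def session_id_for(user_id: str) -> str:
    """Derive a stable, unique session ID for a user from a fixed namespace."""
    return str(uuid.uuid5(uuid.NAMESPACE_URL, f"superagent-session:{user_id}"))
```

The same user always maps to the same session, while different users get different sessions.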

Here’s sample output:

[Screenshot: Superagent output]


© 2024 DataStax | Privacy policy | Terms of use
