About your Astra DB database

Welcome! Let’s cover some basics and review how you can get connected.

Your paid database starts with the following specifications:

  • A single region

  • A single keyspace

  • Storage based on your selected plan

  • Capacity for up to 200 tables

  • Replication factor of three to provide optimal uptime and data integrity

To better understand your database capabilities, review the Astra DB database guardrails and limits.

Astra DB plan options

Serverless databases

DataStax Astra DB offers a serverless database: an elastic, cloud-native database that scales with your workload. You can scale compute and storage independently, letting you focus more on developing your application and less on your infrastructure costs.

Astra DB offers three plans: Free, Pay As You Go, and Enterprise (annual commitment). Each plan is billed according to Astra DB pricing, which is based on read and write requests, data storage, and data transfer. For more information, see DataStax Astra DB Pricing.

Limitations

Serverless databases do not support materialized views or VPC peering; they do, however, support private endpoints. For additional guardrails and limits, see Guardrails and limits.

Database regions

When creating a database, select a region that is geographically close to your users to optimize performance.

If you add multiple regions to your database, each region can be used only once; you cannot add the same region to the same database more than once.
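
If you prefer to script database creation, the DevOps API lets you specify the region at creation time. The following is a minimal sketch assuming the DevOps API v2 /v2/databases endpoint and an application token with database-creation permission; the database name, keyspace, provider, and region values are illustrative.

    import requests

    # Illustrative values; substitute your own application token and preferred region.
    astra_token = "AstraCS:<application-token>"

    response = requests.post(
        "https://api.astra.datastax.com/v2/databases",
        headers={
            "Authorization": f"Bearer {astra_token}",
            "Content-Type": "application/json",
        },
        json={
            "name": "mydb",
            "keyspace": "my_keyspace",
            "cloudProvider": "GCP",
            "tier": "serverless",
            "capacityUnits": 1,
            "region": "us-east1",  # choose a region geographically close to your users
        },
    )
    response.raise_for_status()  # a 201 response means provisioning has started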

Serverless database regions

Google Cloud

Region                  | Location                          | Pricing
us-east1                | Moncks Corner, South Carolina, US | Standard
us-east4                | Ashburn, Virginia, US             | Standard
us-west1                | The Dalles, Oregon, US            | Standard
us-central1             | Council Bluffs, Iowa, US          | Standard
northamerica-northeast1 | Montréal, Québec, Canada          | Standard
us-west4                | Las Vegas, Nevada, US             | Standard
europe-west1            | Saint-Ghislain, Belgium           | Standard
europe-west2            | London, England                   | Premium
asia-south1             | Mumbai, India                     | Standard
australia-southeast1    | Sydney, Australia                 | Premium

AWS

Region         | Location              | Pricing
us-east-1      | Northern Virginia, US | Standard
us-west-2      | Oregon, US            | Standard
us-east-2      | Ohio, US              | Standard
eu-central-1   | Frankfurt, Germany    | Standard
eu-west-1      | Ireland               | Standard
ap-south-1     | Mumbai, India         | Standard
ap-southeast-1 | Singapore             | Standard
ap-southeast-2 | Sydney, Australia     | Premium
ap-east-1      | Hong Kong             | Premium
sa-east-1      | São Paulo, Brazil     | Premium Plus

Azure

Region        | Location                   | Pricing
westeurope    | Netherlands                | Premium
francecentral | Paris, France              | Standard
westus2       | Washington (state), US     | Premium
eastus2       | Virginia, US               | Standard
canadacentral | Toronto, Ontario, Canada   | Standard
eastus        | Washington, DC, US         | Standard
australiaeast | New South Wales, Australia | Premium
centralindia  | Central India (Pune)       | Standard

How do you want to connect?

Option | Description
I don’t want to create or manage a schema. Just let me get started. | Use schemaless JSON documents with the Document API.
I want to start using my database now with APIs. | Use the REST API or GraphQL API to begin interacting with your database and self-manage the schema.
I have an application and want to use the DataStax drivers. | Initialize one of the DataStax drivers to manage database connections for your application (a connection sketch follows this table).
I know CQL and want to connect quickly to use my database. | Use the integrated CQL shell or the standalone CQLSH tool to interact with your database using CQL.
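
For the driver option, the sketch below shows one way to connect with the DataStax Python driver (cassandra-driver) using a Secure Connect Bundle and an application token; the bundle path, token, and keyspace name are placeholders, not values from this page.

    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider

    # Placeholder path and credentials; download the Secure Connect Bundle from the Astra Portal.
    cloud_config = {"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"}
    auth_provider = PlainTextAuthProvider("token", "AstraCS:<application-token>")

    cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
    session = cluster.connect("my_keyspace")

    # Simple smoke test: read the server release version.
    print(session.execute("SELECT release_version FROM system.local").one())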

Astra DB database guardrails and limits

DataStax Astra DB includes guardrails and sets limits to ensure good practices, foster availability, and promote optimal configurations for your databases.

Each plan includes a $25.00 free credit per month, allowing you to create an Astra DB database for free: create a database with just a few clicks and start developing within minutes. The $25 credit covers approximately 30 million reads, 5 million writes, and 40 GB of storage per month on a serverless database.

Limited access to administrative tools

Because Astra DB hides the complexities of database management to help you focus on developing applications, Astra DB is not compatible with DataStax Enterprise (DSE) administrative tools, such as nodetool and dsetool.

Use the DataStax Astra Portal to view database statistics and health metrics. Astra DB does not support access to the database using Java Management Extensions (JMX) tools, such as JConsole.

Simplified security without compromise

Astra DB provides a secure cloud-based database without dramatically changing the way you currently access your internal database:

  • New user management flows avoid the need for superusers and global keyspace administration in CQL.

  • Endpoints are secured using mutual authentication, either with mutual TLS or secure tokens issued to the client.

  • TLS provides a secure transport layer you can trust, ensuring that in-flight data is protected.

  • Data at rest is protected by encrypted volumes.

Additionally, Astra DB incorporates role-based access control (RBAC).

See Security guidelines for more information about how Astra DB implements security.

Replication within regions

Each Astra DB database uses replication across three availability zones within the launched region to promote uptime and ensure data integrity.

Serverless database limits

The following limits are set for serverless databases created using Astra DB. These limits ensure good practices, foster availability, and promote optimal configurations for your database.

Databases

Every organization starts with a maximum of five databases. To increase or remove this limit, first upgrade your account to Pay As You Go or higher, and then submit a request to DataStax Support.

Columns

Parameter                          | Limit | Notes
Size of values in a single column  | 10 MB | Hard limit.
Number of columns per table        | 75    | Hard limit.

Tables

Parameter                     | Limit | Notes
Number of tables per database | 200   | A warning is issued when the database exceeds 100 tables.
Table properties              | Fixed | All table properties are fixed except for expiring data with time-to-live (TTL).

Secondary indexes and materialized views are not available for serverless databases. Our team is working to offer materialized views for serverless databases soon.

Workloads

Rate limiting is not the same as Write Request Units (WRUs) or Read Request Units (RRUs), which are counted based on the size of an operation's input or output. Astra workloads are limited to 4096 operations per second (reads or writes) by default. A batch counts as a single operation, regardless of the number of statements it contains. If you see a "Rate limit reached" error in your application and want your limit raised, open a support ticket.
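
As a rough sketch of the "batch counts as one operation" behavior, the example below groups several inserts into a single BatchStatement with the Python driver; the keyspace, table, bundle path, and token are hypothetical. Keep batches small and, ideally, within a single partition to stay under the batch size guardrails listed later on this page.

    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider
    from cassandra.query import BatchStatement

    # Hypothetical table: my_keyspace.events (id int PRIMARY KEY, payload text).
    cluster = Cluster(
        cloud={"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"},
        auth_provider=PlainTextAuthProvider("token", "AstraCS:<application-token>"),
    )
    session = cluster.connect()

    insert = session.prepare("INSERT INTO my_keyspace.events (id, payload) VALUES (?, ?)")
    batch = BatchStatement()
    for i, payload in enumerate(["a", "b", "c"]):
        batch.add(insert, (i, payload))

    # The whole batch is counted as a single operation against the rate limit.
    session.execute(batch)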

Storage-Attached Indexing (SAI) limits

The maximum number of SAI indexes on a table is 10. There can be no more than 50 SAI indexes in a single database.
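
As a hedged illustration, the sketch below creates a single SAI index through the Python driver; the keyspace, table, column, and index names are hypothetical, and an already-open session (as in the connection sketch above) is assumed.

    # Assumes an existing table my_keyspace.users (id int PRIMARY KEY, email text)
    # and an open `session` as in the connection sketch above.
    session.execute("""
        CREATE CUSTOM INDEX IF NOT EXISTS users_email_sai
        ON my_keyspace.users (email)
        USING 'StorageAttachedIndex'
    """)
    # Each table accepts at most 10 SAI indexes, and a database accepts at most 50.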

Automated backup and restore

Serverless databases created using Astra DB are automatically backed up once an hour, and each hourly backup is stored for 20 days. Backups include only SSTables; data still in memtables or the commit log that has not yet been flushed to SSTables is not included.

For databases with multiple datacenters in various regions, backup management is available only for the original datacenter selected when you created the database. Restoring from a backup removes all other datacenters, restores the primary datacenter from the backup, and then re-adds the other datacenters as replicas of the restored primary datacenter.

If the database was terminated, all data is destroyed and is unrecoverable.

If data is accidentally deleted or corrupted, contact DataStax Support to restore it from one of the available backups. The 20-day retention window ensures that the data to restore still exists as a saved backup.

When restoring data, DataStax Support restores it to the same database, replacing the current data with the data from the backup; any data added after that backup was taken is no longer available. If a table was dropped, DataStax Support can also restore that table to the same cluster from the specified backup.

Cassandra Query Language (CQL)

At this time, user-defined functions (UDFs) and user-defined aggregate functions (UDAs) are not enabled.

Parameter | Limit | Notes
Consistency level | Fixed | Supported consistency levels. Reads: any supported consistency level is permitted. Single-region writes: LOCAL_QUORUM and LOCAL_SERIAL. Multi-region writes: EACH_QUORUM and SERIAL.
Compaction strategy | Fixed | UnifiedCompactionStrategy is a more efficient compaction strategy that combines ideas from STCS (SizeTieredCompactionStrategy), LCS (LeveledCompactionStrategy), and TWCS (TimeWindowCompactionStrategy) along with token range sharding. This all-inclusive compaction strategy works well for all use cases.
Lists | Fixed | You cannot UPDATE or DELETE a list value by index because Astra DB does not allow list operations that perform a read-before-write. INSERT operations work the same way in Astra DB as in Apache Cassandra® and DataStax Enterprise (DSE), as do UPDATE and DELETE operations that are not by index.
Page size | Fixed | The proper page size is configured automatically.
Large partition | Warning | A warning is issued when reading or compacting a partition that exceeds 100 MB.
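
To make the list restriction concrete, here is a small sketch assuming a hypothetical my_keyspace.items table with a tags list<text> column and an open session from the connection sketch above: appending to a list is allowed, while setting or deleting an element by index is rejected because it requires a read-before-write.

    # Appending to a list does not require a read-before-write and is allowed:
    session.execute("UPDATE my_keyspace.items SET tags = tags + ['new'] WHERE id = 1")

    # Setting or deleting a list element by index is a read-before-write and is rejected:
    # session.execute("UPDATE my_keyspace.items SET tags[0] = 'new' WHERE id = 1")
    # session.execute("DELETE tags[0] FROM my_keyspace.items WHERE id = 1")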

CQL commands

The following CQL commands are not supported in Astra DB:

  • ALTER KEYSPACE

  • ALTER SEARCH INDEX CONFIG

  • ALTER SEARCH INDEX SCHEMA

  • COMMIT SEARCH INDEX

  • CREATE KEYSPACE

  • CREATE SEARCH INDEX

  • CREATE TRIGGER

  • CREATE FUNCTION

  • DESCRIBE FUNCTION

  • DROP FUNCTION

  • DROP KEYSPACE

  • DROP SEARCH INDEX CONFIG

  • DROP TRIGGER

  • LIST PERMISSIONS

  • REBUILD SEARCH INDEX

  • RELOAD SEARCH INDEX

  • RESTRICT

  • RESTRICT ROWS

  • UNRESTRICT

  • UNRESTRICT ROWS

For supported CQL commands, see the Astra DB CQL quick reference.

cassandra.yaml

If you are an experienced Cassandra or DataStax Enterprise user, you are likely familiar with editing the cassandra.yaml file. For Astra DB, the cassandra.yaml file cannot be configured.

The following limits are included in Astra DB:

// for read requests
        page_size_failure_threshold_in_kb =  512
        in_select_cartesian_product_failure_threshold =  25
        partition_keys_in_select_failure_threshold = 20
        tombstone_warn_threshold = 1000
        tombstone_failure_threshold = 100000

// for write requests
        batch_size_warn_threshold_in_kb = 5
        batch_size_fail_threshold_in_kb = 50
        unlogged_batch_across_partitions_warn_threshold = 10
        user_timestamps_enabled = true
        column_value_size_failure_threshold_in_kb = 5 * 1024L
        read_before_write_list_operations_enabled = false
        max_mutation_size_in_kb = 16384

// for schema
        fields_per_udt_failure_threshold = 30 (Classic) or 60 (Serverless)
        collection_size_warn_threshold_in_kb =  5 * 1024L
        items_per_collection_warn_threshold =  20
        columns_per_table_failure_threshold = 50 (Classic) or 75 (Serverless)
        secondary_index_per_table_failure_threshold = 1
        tables_warn_threshold = 100
        tables_failure_threshold = 200

// for node status
        disk_usage_percentage_warn_threshold =  70
        disk_usage_percentage_failure_threshold =  80
        partition_size_warn_threshold_in_mb =  100

// SAI Table Failure threshold
        sai_indexes_per_table_failure_threshold = 10
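
To illustrate how one of these guardrails surfaces in practice, the hedged sketch below builds a SELECT whose IN clause targets more than partition_keys_in_select_failure_threshold (20) partition keys; the table name is hypothetical, and the query is expected to be rejected rather than executed.

    # Hypothetical table my_keyspace.items (id int PRIMARY KEY, ...), open `session` as above.
    ids = list(range(25))  # 25 partition keys exceeds the 20-key failure threshold
    placeholders = ", ".join(["%s"] * len(ids))
    query = f"SELECT * FROM my_keyspace.items WHERE id IN ({placeholders})"
    # session.execute(query, ids)  # fails once the partition-key threshold is exceeded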