May 18, 2021
With the release of CockroachDB v21.1, we've made a variety of flexibility, performance, and compatibility improvements. Check out a summary of the most significant user-facing changes and then upgrade to CockroachDB v21.1.
To learn more:
- Read the v21.1 blog post.
- Join us on May 19 for a livestream on why multi-region applications matter and how our Product and Engineering teams partnered to make them simple in v21.1.
Downloads
Docker image
$ docker pull cockroachdb/cockroach:v21.1.0
CockroachCloud
- Get a free v21.1 cluster on CockroachCloud.
- Learn about recent updates to CockroachCloud in the CockroachCloud Release Notes.
Feature summary
This section summarizes the most significant user-facing changes in v21.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the release notes for previous testing releases. You can also search for what's new in v21.1 in our docs.
"Core" features are freely available in the core version and do not require an enterprise license. "Enterprise" features require an enterprise license. CockroachCloud clusters include all enterprise features. You can also use `cockroach demo` to test enterprise features in a local, temporary cluster.
- SQL
- Recovery and I/O
- Database Operations
- Backward-incompatible changes
- Deprecations
- Known limitations
- Education
SQL
Version | Feature | Description |
---|---|---|
Enterprise | Multi-Region Improvements | It is now much easier to leverage CockroachDB's low-latency and resilient multi-region capabilities. For an introduction to the high-level concepts, see the Multi-Region Overview. For further details and links to related SQL statements, see Choosing a Multi-Region Configuration, When to Use ZONE vs. REGION Survival Goals, and When to Use REGIONAL vs. GLOBAL Tables. For a demonstration of these capabilities using a local cluster, see the Multi-Region Tutorial. Finally, for details about related architectural enhancements, see Non-Voting Replicas and Non-Blocking Transactions. |
Enterprise | Automatic Follower Reads for Read-Only Transactions | You can now force all read-only transactions in a session to use follower reads by setting the new `default_transaction_use_follower_reads` session variable to `on`. |
Core | Query Observability Improvements | `EXPLAIN` and `EXPLAIN ANALYZE` responses have been unified and extended with additional details, including automatic statistics-backed row estimates for `EXPLAIN`, and maximum memory usage, network usage, nodes used per operator, and rows processed per operator for `EXPLAIN ANALYZE`. `EXPLAIN ANALYZE` now outputs a text-based statement plan tree by default, showing statistics about the statement processors at each phase of the statement. The Transactions Page and Statements Page of the DB Console include similar details, as well as the average time statements were in contention with other transactions within a specified time interval. The SQL Dashboard has been expanded with additional graphs for latency, contention, memory, and network traffic. The SQL Tuning with EXPLAIN tutorial and Optimize Statement Performance guidance have been updated to leverage these improvements. |
Core | Inverted Joins | CockroachDB now supports inverted joins, which force the optimizer to use an inverted index on the right side of the join. Inverted joins can only be used with INNER and LEFT joins. |
Core | Partial Inverted Indexes | You can now create a partial inverted index on a subset of `JSON`, `ARRAY`, or geospatial container column data. |
Core | Virtual Computed Columns | You can now create virtual computed columns, which are not stored on disk and are recomputed as the column data in the expression changes. |
Core | Dropping Values in User-Defined Types | It's now possible to drop values in user-defined types. |
Core | Sequence Caching | You can now create a sequence with the `CACHE` keyword to have the sequence cache its values in memory. |
Core | Changing Sequence & View Ownership | You can use the new `OWNER TO` parameter to change the owner of a sequence or view. |
Core | Show CREATE Statements for the Current Database | You can now use `SHOW CREATE ALL TABLES` to return the `CREATE` statements for all of the tables, views, and sequences in the current database. |
Core | Storage of Z/M Coordinates for Spatial Objects | You can now store a third dimension coordinate (`Z`), a measure coordinate (`M`), or both (`ZM`) with spatial objects. Note, however, that CockroachDB's spatial indexing is still based on the 2D coordinate system. This means that the Z/M dimension is not index-accelerated when using spatial predicates, and some spatial functions ignore the Z/M dimension, with transformations discarding the Z/M value. |
Core | Third-Party Tool Support | Spatial libraries for Hibernate, ActiveRecord, and Django are now fully compatible with CockroachDB's spatial features. The DataGrip IDE and Liquibase schema migration tool are also now supported. |
Core | Connection Pooling Guidance | Creating the appropriate size pool of connections is critical to gaining maximum performance in an application. For guidance on sizing, validating, and using connection pools with CockroachDB, as well as examples for Java and Go applications, see Use Connection Pools. |
Core | PostgreSQL 13 Compatibility | CockroachDB is now wire-compatible with PostgreSQL 13. For more information, see PostgreSQL Compatibility. |
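Several of the SQL features above can be tried together in a local cluster (for example, one started with `cockroach demo`). The following is a minimal sketch, not a definitive recipe; the table, sequence, type, and role names are hypothetical, and each statement assumes a v21.1 cluster.

```sql
-- Force all read-only transactions in this session to use follower reads.
SET default_transaction_use_follower_reads = on;

-- Virtual computed column: not stored on disk, recomputed on read.
CREATE TABLE orders (
    id INT PRIMARY KEY,
    details JSONB,
    total DECIMAL AS ((details->>'total')::DECIMAL) VIRTUAL
);

-- Partial inverted index covering only a subset of the JSONB column data.
CREATE INVERTED INDEX pending_details ON orders (details)
    WHERE details->>'status' = 'pending';

-- Sequence that caches values in memory, then a change of ownership
-- (assumes a role named app_user already exists).
CREATE SEQUENCE order_seq CACHE 10;
ALTER SEQUENCE order_seq OWNER TO app_user;

-- Drop a value from a user-defined type.
CREATE TYPE order_status AS ENUM ('open', 'closed', 'archived');
ALTER TYPE order_status DROP VALUE 'archived';
```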
Recovery and I/O
Version | Feature | Description |
---|---|---|
Enterprise | Changefeed Topic Naming Improvements | New `CHANGEFEED` options give you more control over topic naming: The `full_table_name` option lets you use a fully-qualified table name in topics, subjects, schemas, and record output instead of the default table name, and can prevent unintended behavior when the same table name is present in multiple databases. The `avro_schema_prefix` option lets you use a fully-qualified schema name for a table instead of the default table name, and makes it possible for multiple databases or clusters to share the same schema registry when the same table name is present in multiple databases. |
Core | Running Jobs Asynchronously | You can use the new `DETACHED` option to run `BACKUP`, `RESTORE`, and `IMPORT` jobs asynchronously and receive a job ID immediately rather than waiting for the job to finish. This option enables you to run such jobs within transactions. |
Core | Import from Local Dump File | The new `cockroach import` command imports a database or table from a local `PGDUMP` or `MYSQLDUMP` file into a running cluster. This is useful for quick imports of files 15 MB or smaller. For larger imports, use the `IMPORT` statement. |
Core | Additional Import Options | New `IMPORT` options give you more control over the import process's behavior: The `row_limit` option limits the number of rows to import, which is useful for finding errors quickly before executing a more time- and resource-consuming import; the `ignore_unsupported_statements` option ignores SQL statements in `PGDUMP` files that are unsupported by CockroachDB; and the `log_ignored_statements` option logs unsupported statements to cloud storage or `userfile` storage when `ignore_unsupported_statements` is enabled. |
Core | Re-validating Indexes During RESTORE | Incremental backups created by v20.2.2 and prior v20.2.x releases, or v20.1.4 and prior v20.1.x releases, may include incomplete data for indexes that were in the process of being created. Therefore, when incremental backups taken by these versions are restored by v21.1.0, any indexes created during those incremental backups will be re-validated by `RESTORE`. |
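The backup, import, and changefeed options above can be sketched as follows. The storage URIs, Kafka address, and table names here are hypothetical placeholders; substitute your own.

```sql
-- Run a backup asynchronously: the statement returns a job ID
-- immediately instead of blocking until the backup finishes.
BACKUP TO 'gs://acme-backups/db' WITH detached;

-- Import only the first 10 rows of a dump, skipping statements
-- CockroachDB does not support and logging them for later review.
IMPORT PGDUMP 'userfile://defaultdb.public.userfiles/dump.sql'
    WITH row_limit = '10',
         ignore_unsupported_statements,
         log_ignored_statements = 'userfile://defaultdb.public.userfiles/ignored';

-- Changefeed that emits fully-qualified table names, avoiding topic
-- collisions when the same table name exists in multiple databases.
CREATE CHANGEFEED FOR TABLE orders
    INTO 'kafka://broker:9092'
    WITH full_table_name;
```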
Database Operations
Version | Feature | Description |
---|---|---|
Core | Logging Improvements | Log events are now organized into logging channels that address different use cases. Logging channels can be freely mapped to log sinks and routed to destinations outside CockroachDB (including external log collectors). All logging aspects, including message format (e.g., JSON), are now configurable via YAML. |
Core | Cluster API v2.0 | This new API for monitoring clusters and nodes builds on prior endpoints, offering a consistent REST interface that's easier to use with your choice of tooling. The API offers a streamlined authentication process and developer-friendly reference documentation. |
Core | OpenShift-certified Kubernetes Operator | You can now deploy CockroachDB on the Red Hat OpenShift platform using the latest OpenShift-certified Kubernetes Operator. |
Core | Auto TLS Certificate Setup | Using the new cockroach connect command, you can now let CockroachDB handle the creation and distribution among nodes of a cluster's CA (certificate authority) and node certificates. Note that this feature is an alpha release with core functionality that may not meet your requirements. |
Core | Built-in Timezone Library | The CockroachDB binary now includes a copy of the tzdata library, which is required by certain features that use time zone data. If CockroachDB cannot find the tzdata library externally, it will now use this built-in copy. |
Backward-incompatible changes
Before upgrading to CockroachDB v21.1, be sure to review the following backward-incompatible changes and adjust your deployment as necessary.
- Rows containing empty arrays in `ARRAY` columns are now contained in inverted indexes. This change is backward-incompatible because prior versions of CockroachDB will not be able to recognize and decode keys for empty arrays. Note that rows containing `NULL` values in an indexed column will still not be included in inverted indexes.
- Concatenation between a non-null argument and a null argument is now typed as string concatenation, whereas it was previously typed as array concatenation. This means that the result of `NULL || 1` will now be `NULL` instead of `{1}`. To preserve the old behavior, the null argument can be cast to an explicit type.
- The payload fields for certain event types in `system.eventlog` have been changed and/or renamed. Note that the payloads in `system.eventlog` were undocumented, so no guarantee was made about cross-version compatibility to this point. The list of changes includes (but is not limited to):
  - `TargetID` has been renamed to `NodeID` for `node_join`.
  - `TargetID` has been renamed to `TargetNodeID` for `node_decommissioning` / `node_decommissioned` / `node_recommissioned`.
  - `NewDatabaseName` has been renamed to `NewDatabaseParent` for `convert_to_schema`.
  - `grant_privilege` and `revoke_privilege` have been removed; they are replaced by `change_database_privilege`, `change_schema_privilege`, `change_type_privilege`, and `change_table_privilege`. Each event only reports a change for one user/role, so the `Grantees` field was renamed to `Grantee`.
  - Each `drop_role` event now pertains to a single user/role.
- The connection and authentication logging enabled by the cluster settings `server.auth_log.sql_connections.enabled` and `server.auth_log.sql_sessions.enabled` previously used a text format that was hard to parse and integrate with external monitoring tools. This has been changed to use the standard notable event mechanism, with standardized payloads. The output format is now structured; see the reference documentation for details about the supported event types and payloads.
- The format for SQL audit, execution, and query logs has changed from a crude space-delimited format to JSON. To opt out of this new behavior and restore the pre-v21.1 logging format, you can set the cluster setting `sql.log.unstructured_entries.enabled` to `true`.
- The `cockroach debug ballast` command now refuses to overwrite the target ballast file if it already exists. This change is intended to prevent mistaken uses of the `ballast` command by operators. Scripts that integrate `cockroach debug ballast` can consider adding a `rm` command.
- Removed the `kv.atomic_replication_changes.enabled` cluster setting. All replication changes on a range now use joint consensus.
- Currently, changefeeds connected to Kafka versions < v1.0 are not supported in CockroachDB v21.1.
Deprecations
- The CLI flags `--log-dir`, `--log-file-max-size`, `--log-file-verbosity`, and `--log-group-max-size` are deprecated. Logging configuration can now be specified via the `--log` parameter. See the Logging documentation for details.
- The client-side command `\show` for the SQL shell is deprecated in favor of the new command `\p`. This prints the contents of the query buffer entered so far.
- Currently, Google Cloud Storage (GCS) connections default to the `cloudstorage.gs.default.key` cluster setting. This default behavior will no longer be supported in v21.2. If you are relying on this default behavior, we recommend adjusting your queries and scripts to specify the `AUTH` parameter you want to use. Similarly, if you are using the `cloudstorage.gs.default.key` cluster setting to authorize your GCS connection, we recommend switching to `AUTH=specified` or `AUTH=implicit`. `AUTH=specified` will be the default behavior in v21.2 and beyond.
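For example, a statement that passes the `AUTH` parameter explicitly in the storage URI rather than relying on the deprecated default (the bucket name here is a hypothetical placeholder):

```sql
-- AUTH=implicit uses the ambient credentials of the node's environment;
-- no key needs to be stored in a cluster setting.
BACKUP TO 'gs://acme-backups/db?AUTH=implicit';
```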
Known limitations
For information about new and unresolved limitations in CockroachDB v21.1, with suggested workarounds where applicable, see Known Limitations.
Education
Area | Topic | Description |
---|---|---|
Cockroach University | New Intro Courses | Introduction to Distributed SQL and CockroachDB teaches you the core concepts behind distributed SQL databases and describes how CockroachDB fits into this landscape. Practical First Steps with CockroachDB is a hands-on sequel that gives you the tools to get started with CockroachDB. |
Cockroach University | New Java Course | Fundamentals of CockroachDB for Java Developers walks you through building a full-stack vehicle-sharing app in Java using the popular Spring Data JPA framework with Spring Boot and a CockroachCloud Free cluster as the backend. |
Cockroach University | New Query Performance Course | CockroachDB Query Performance for Developers teaches you key CockroachDB features and skills to improve application performance and functionality, such as analyzing a query execution plan, using indexes to avoid expensive full table scans, improving sorting performance, and efficiently querying fields in JSON records. |
Docs | Quickstart | Documented the simplest way to get started with CockroachDB for testing and app development by using CockroachCloud Free. |
Docs | Developer Guidance | Published more comprehensive, task-oriented guidance for developers building applications on CockroachDB, including connecting to a cluster, designing a database schema, reading and writing data, optimizing query performance, and debugging applications. |
Docs | Connection Pooling | Added guidance on planning, configuring, and using connection pools with CockroachDB, as well as examples for Java and Go applications. |
Docs | Sample Apps on CockroachCloud Free | Updated various Java, Python, Node.js, Ruby, and Go sample app tutorials to offer CockroachCloud Free as the backend. |
Docs | Licensing FAQs | Updated the Licensing FAQ to explain our licensing types, how features align to licenses, how to perform basic tasks around licenses (e.g., obtain, set, verify, monitor, renew), and other common questions. |
Docs | Product Limits | Added object sizing and scaling considerations, including specific hard limits imposed by CockroachDB and practical limits based on our performance testing and observations. |
Docs | System Catalogs | Documented important internal system catalogs that provide non-stored data to client applications. |