Things to remember when working with CrateDB are: - CrateDB is a distributed database written in Java, where individual nodes form a database cluster, using a shared-nothing architecture. - CrateDB brings together fundamental components to manage big data after the Hadoop and Spark batch-processing era, much like Teradata, BigQuery, and Snowflake do. - Clients can connect to CrateDB using HTTP or the PostgreSQL wire protocol. - The default TCP ports of CrateDB are 4200 for the HTTP interface and 5432 for the PostgreSQL interface. - After connecting to CrateDB, the language of choice is SQL, largely compatible with PostgreSQL's SQL dialect. - The data storage layer is based on Lucene; the data distribution layer was inspired by Elasticsearch. - Storage concepts of CrateDB include partitioning and sharding to manage data sets too large to fit on a single machine. - CrateDB Cloud offers a managed option for running CrateDB and provides additional features like automated backups, data ingest / ETL utilities, and scheduling of recurrent jobs. - Get started with CrateDB Cloud at `https://console.cratedb.cloud`. - CrateDB can also run on your own premises, ideally by using its Docker/OCI image `docker.io/crate`. Nightly images are available at `docker.io/crate/crate:nightly`. .. image:: docs/_static/crate-logo.svg :alt: CrateDB :target: https://cratedb.com ---- .. image:: https://github.com/crate/crate/workflows/CrateDB%20SQL/badge.svg?branch=master :target: https://github.com/crate/crate/actions?query=workflow%3A%22CrateDB+SQL%22 .. image:: https://img.shields.io/badge/docs-latest-brightgreen.svg :target: https://cratedb.com/docs/crate/reference/en/latest/ .. image:: https://img.shields.io/badge/container-docker-green.svg :target: https://hub.docker.com/_/crate/ | `Help us improve CrateDB by taking our User Survey! `_ About ===== CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of data in real-time. CrateDB offers the `benefits`_ of an SQL database *and* the scalability and flexibility typically associated with NoSQL databases. Modest CrateDB clusters can ingest tens of thousands of records per second without breaking a sweat. You can run ad-hoc queries using `standard SQL`_. CrateDB's blazing-fast distributed query execution engine parallelizes query workloads across the whole cluster. CrateDB is well suited to `containerization`_ and can be `scaled horizontally`_ using ephemeral virtual machines (e.g., `Kubernetes`_, `AWS`_, and `Azure`_) with `no shared state`_. You can deploy and run CrateDB on any sort of network — from personal computers to `multi-region hybrid clouds and the edge`_. Features ======== - Use `standard SQL`_ via the `PostgreSQL wire protocol`_ or an `HTTP API`_. - Dynamic table schemas and queryable objects provide document-oriented features in addition to the relational features of SQL. - Support for time-series data, real-time full-text search, geospatial data types, and search capabilities. - Horizontally scalable, highly available, and fault-tolerant clusters that run very well in virtualized and containerized environments. - Extremely fast distributed query execution. - Auto-partitioning, auto-sharding, and auto-replication. - Self-healing and auto-rebalancing. - `User-defined functions`_ (UDFs) can be used to extend the functionality of CrateDB. Screenshots =========== CrateDB provides an `Admin UI`_: ..
image:: crate-admin.gif :alt: Screenshots of the CrateDB Admin UI Try CrateDB =========== Run CrateDB via the official `Docker Image`_: .. code-block:: console sh$ docker run --publish 4200:4200 --publish 5432:5432 --env CRATE_HEAP_SIZE=1g crate -Cdiscovery.type=single-node Or visit the `installation documentation`_ to see all the available download and install options. Once you're up and running, head over to the `introductory docs`_. To interact with CrateDB, you can use the Admin UI `sql console`_ or the `CrateDB shell`_ CLI tool. Alternatively, review the list of recommended `clients and tools`_ that work with CrateDB. For container-specific documentation, check out the `CrateDB on Docker how-to guide`_ or the `CrateDB on Kubernetes how-to guide`_. Contributing ============ This project is primarily maintained by `Crate.io`_, but we welcome community contributions! See the `developer docs`_ and the `contribution docs`_ for more information. Security ======== The CrateDB team and community take security bugs seriously. We appreciate your efforts to `responsibly disclose`_ your findings, and will make every effort to acknowledge your contributions. If you think you discovered a security flaw, please follow the guidelines at `SECURITY.md`_. Help ==== Looking for more help? - Try one of our `beginner tutorials`_, `how-to guides`_, or consult the `reference manual`_. - Check out our `support channels`_. - `Crate.io`_ also offers `CrateDB Cloud`_, a fully-managed *CrateDB Database as a Service* (DBaaS). The `CrateDB Cloud Tutorials`_ will get you started. .. _Admin UI: https://cratedb.com/docs/crate/admin-ui/ .. _AWS: https://cratedb.com/docs/crate/tutorials/en/latest/cloud/aws/index.html .. _Azure: https://cratedb.com/docs/crate/tutorials/en/latest/cloud/azure/index.html .. _beginner tutorials: https://cratedb.com/docs/crate/tutorials/ .. _benefits: https://cratedb.com/product#compare .. _clients and tools: https://cratedb.com/docs/crate/clients-tools/ .. _containerization: https://cratedb.com/docs/crate/tutorials/en/latest/containers/docker.html .. _contribution docs: CONTRIBUTING.rst .. _Crate.io: https://cratedb.com/company/team .. _CrateDB clients and tools: https://cratedb.com/docs/crate/clients-tools/ .. _CrateDB Cloud Tutorials: https://cratedb.com/docs/cloud/ .. _CrateDB Cloud: https://cratedb.com/product/pricing .. _CrateDB on Docker how-to guide: https://cratedb.com/docs/crate/tutorials/en/latest/containers/docker.html .. _CrateDB on Kubernetes how-to guide: https://cratedb.com/docs/crate/tutorials/en/latest/containers/kubernetes/index.html .. _CrateDB shell: https://cratedb.com/docs/crate/crash/ .. _developer docs: devs/docs/index.rst .. _Docker image: https://hub.docker.com/_/crate/ .. _document-oriented: https://en.wikipedia.org/wiki/Document-oriented_database .. _Dynamic table schemas: https://cratedb.com/docs/crate/reference/en/master/general/ddl/column-policy.html .. _fulltext search: https://cratedb.com/docs/crate/reference/en/latest/general/dql/fulltext.html .. _geospatial features: https://cratedb.com/docs/crate/reference/en/master/general/dql/geo.html .. _how-to guides: https://cratedb.com/docs/crate/howtos/ .. _HTTP API: https://cratedb.com/docs/crate/reference/en/latest/interfaces/http.html .. _installation documentation: https://cratedb.com/docs/crate/tutorials/en/latest/basic/index.html .. _introductory docs: https://cratedb.com/docs/crate/tutorials/ .. _Kubernetes: https://cratedb.com/docs/crate/tutorials/en/latest/containers/kubernetes/index.html .. 
_multi-region hybrid clouds and the edge: https://cratedb.com/docs/cloud/en/latest/tutorials/edge/index.html .. _no shared state: https://en.wikipedia.org/wiki/Shared-nothing_architecture .. _PostgreSQL wire protocol: https://cratedb.com/docs/crate/reference/en/latest/interfaces/postgres.html .. _queryable objects: https://cratedb.com/docs/crate/reference/en/master/general/dql/selects.html#container-data-types .. _reference manual: https://cratedb.com/docs/crate/reference/ .. _relational: https://en.wikipedia.org/wiki/Relational_model .. _responsibly disclose: https://en.wikipedia.org/wiki/Coordinated_vulnerability_disclosure .. _scaled horizontally: https://stackoverflow.com/questions/11707879/difference-between-scaling-horizontally-and-vertically-for-databases .. _SECURITY.md: https://github.com/crate/crate/blob/master/SECURITY.md .. _sql console: https://cratedb.com/docs/crate/admin-ui/en/latest/console.html#sql-console .. _standard SQL: https://cratedb.com/docs/crate/reference/en/latest/sql/index.html .. _support channels: https://cratedb.com/support .. _time-series data: https://cratedb.com/docs/crate/howtos/en/latest/getting-started/normalize-intervals.html .. _user-defined functions: https://cratedb.com/docs/crate/reference/en/latest/general/user-defined-functions.html--- orphan: true --- # CrateDB Documentation Welcome to the official CrateDB Documentation. Whether you are a developer, database administrator, or just starting your journey with CrateDB, our documentation provides the information and knowledge needed to build real-time analytics and hybrid search applications that leverage CrateDB's unique features. :::{rubric} Benefits ::: * As a unified data platform, CrateDB supports analyzing relational, JSON, time-series, geospatial, full-text, and vector data within a single system, eliminating the need for multiple databases. * The fully distributed SQL query engine, built on top of Apache Lucene and inheriting technologies from Elasticsearch/OpenSearch, provides performant aggregations and advanced SQL features like JOINs and CTEs on large datasets of semi-structured data. * Real-time indexing automatically indexes all columns, including nested structures, as data is ingested, eliminating the need to worry about indexing strategy. * The flexible data schema dynamically adapts based on the data you ingest, offering seamless integration and instant readiness for analysis. * Columnar storage enables fast search query and aggregation performance. * PostgreSQL wire protocol compatibility and an HTTP interface provide versatile integration capabilities. * AI-ready: The vector store subsystem integrates well with an extensive 3rd-party ecosystem of AI/ML frameworks for advanced data analysis and data-driven decisions. ::::::{grid} 1 :margin: 1 :padding: 2 :::{grid-item-card} {material-outlined}`rocket_launch;1.7em` CrateDB Cloud :link: cloud-docs-index :link-type: ref :link-alt: CrateDB Cloud :padding: 2 :class-title: sd-fs-5 Start with a fully managed CrateDB instance to accelerate and simplify working with analytical data. CrateDB Cloud enables seamless deployment, monitoring, backups, and scaling of CrateDB clusters on AWS, Azure, or GCP, eliminating the need for direct database management. With CrateDB Cloud, you can skip infrastructure setup and focus on delivering value for your business with a query console, SQL Scheduler, table policies, and various connectors to import data.
+++ ```{button-link} https://cratedb.com/docs/cloud/tutorials/quick-start.html :color: primary :expand: **Start forever free cluster with 8 GB of storage** ``` ::: :::::{grid-item} :margin: 0 :padding: 2 ::::{grid} 2 :margin: 0 :padding: 0 :::{grid-item-card} {material-outlined}`lightbulb;1.7em` Database Features :link: https://cratedb.com/docs/guide/feature/ :link-alt: Database Features :class-title: sd-fs-5 Explore all functional, operational and advanced features of CrateDB at a glance. ::: :::{grid-item-card} {material-outlined}`auto_stories;1.7em` Database Manual :link: https://cratedb.com/docs/reference/ :link-alt: Database Manual :class-title: sd-fs-5 Learn core CrateDB concepts, including data modeling, querying data, aggregations, sharding, and more. ::: :::: ::::: :::{grid-item-card} {material-outlined}`link;1.7em` Client Libraries :link: https://cratedb.com/docs/crate/clients-tools/en/latest/connect/ :link-alt: CrateDB: Client Drivers and Libraries :padding: 2 :class-title: sd-fs-5 Learn how to connect your applications using database drivers, libraries, adapters, and connectors. CrateDB supports both the [HTTP protocol] and the [PostgreSQL wire protocol], ensuring compatibility with many PostgreSQL clients. Through corresponding drivers and adapters, CrateDB is compatible with [ODBC], [JDBC], and other database API specifications. ::: :::::: ## Learn :::{rubric} Videos ::: ::::{card} Today's data challenges and a high level overview of CrateDB :class-title: sd-fs-4 :class-body: sd-text-center :class-footer: sd-fs-6 :::{youtube} cByAOsaYddQ ::: +++ _Webinar: Turbocharge your aggregations, search & AI models & get real-time insights._ :::{div} text-smaller Discover CrateDB, the leading real-time analytics database. It provides the flexibility, speed, and scalability necessary to master today's data challenges. Watch this video to learn how CrateDB empowers you with real-time insights into your data to fuel advanced analytics, search, and AI models—enabling informed decisions that drive meaningful impact. ::: :::: ::::{card} CrateDB Videos curated by Simon Prickett :class-footer: sd-fs-6 Simon leads Developer Relations at CrateDB. Here, he is [sharing a playlist of videos] he has been part of that will show you what CrateDB is and how you can use it for a variety of projects. Make sure you also do not miss relevant [CrateDB customer stories]. :::: :::{rubric} Introduction ::: Learn about the fundamentals of CrateDB, guided and self-guided. ::::{grid} 2 2 4 4 :padding: 0 :::{grid-item-card} :link: https://cratedb.com/docs/guide/getting-started.html :link-alt: Getting started with CrateDB :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller Getting Started ^^^ {material-outlined}`not_started;3.5em` +++ Learn how to interact with the database for the first time. ::: :::{grid-item-card} :link: https://cratedb.com/docs/guide/ :link-alt: The CrateDB Guide :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller The CrateDB Guide ^^^ {material-outlined}`hiking;3.5em` +++ Guides and tutorials about how to use CrateDB in practice. 
::: :::{grid-item-card} :link: https://learn.cratedb.com/ :link-alt: The CrateDB Academy :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller Academy Courses ^^^ {material-outlined}`school;3.5em` +++ A learning hub dedicated to data enthusiasts. ::: :::{grid-item-card} :link: https://community.cratedb.com/ :link-alt: The CrateDB Community Portal :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller Community Portal ^^^ {material-outlined}`groups;3.5em` +++ A hangout place for members of the CrateDB community. ::: :::: :::{rubric} Admin Tools ::: Learn about the fundamental tools that support working directly with CrateDB. ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} Admin UI :link: https://cratedb.com/docs/crate/admin-ui/ :link-alt: The CrateDB Admin UI :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`admin_panel_settings;3.5em` +++ Learn about CrateDB's included web administration interface. ::: :::{grid-item-card} Crash CLI :link: https://cratedb.com/docs/crate/crash/ :link-alt: The Crash CLI :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`terminal;3.5em` +++ A command-line interface (CLI) tool for working with CrateDB. ::: :::: :::{rubric} Drivers and Integrations ::: Learn about database client libraries, drivers, adapters, connectors, and integrations with 3rd-party applications and frameworks. ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} Ecosystem Catalog :link: catalog :link-type: ref :link-alt: Ecosystem Catalog :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`category;3.5em` +++ Discover integrations and solutions from the open-source community and CrateDB partners. ::: :::{grid-item-card} Integration Tutorials I :link: integrate :link-type: ref :link-alt: Integration Tutorials I :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`integration_instructions;3.5em` +++ Learn about the variety of options to connect and integrate with 3rd-party applications. ::: :::{grid-item-card} Integration Tutorials II :link: https://community.cratedb.com/t/overview-of-cratedb-integration-tutorials/1015 :link-alt: Integration Tutorials II :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`local_library;3.5em` +++ Integration-focused tutorials to help you use CrateDB together with other tools and libraries. ::: :::: ## Examples Learn how to use CrateDB by digesting concise examples. ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} CrateDB Examples :link: https://github.com/crate/cratedb-examples :link-alt: CrateDB Examples :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`play_circle;3.5em` +++ A collection of clear and concise examples how to work with CrateDB. 
::: :::{grid-item-card} Sample Apps :link: https://github.com/crate/crate-sample-apps/ :link-alt: CrateDB Sample Apps :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`apps;3.5em` +++ Different client libraries used by canonical guestbook demo web applications. ::: ::::
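If you would like a very first taste before diving into the examples above, here is a minimal sketch of running a query over the HTTP interface (it assumes a CrateDB instance listening on the default HTTP port 4200, for example a local Docker container):

```console
$ curl -sS -H 'Content-Type: application/json' \
       -X POST 'http://localhost:4200/_sql' \
       -d '{"stmt": "SELECT name FROM sys.cluster"}'
```

The same statement can be submitted through any of the client libraries and tools referenced above.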
---- **Resources:** [Academy] • [Blog] • [Community] • [Customers] • [Examples] • [GitHub] • [Guide] • [Support] [Academy]: https://learn.cratedb.com/ [Blog]: https://cratedb.com/blog [Community]: https://community.cratedb.com/ [CrateDB customer stories]: https://www.youtube.com/playlist?list=PLDZqzXOGoWUJrAF_lVx9U6BzAGG9xYz_v [Customers]: https://cratedb.com/customers [Examples]: https://github.com/crate/cratedb-examples [GitHub]: https://github.com/crate [Guide]: https://cratedb.com/docs/guide/ [HTTP protocol]: https://en.wikipedia.org/wiki/HTTP [Integrations]: #integrate [JDBC]: https://en.wikipedia.org/wiki/Java_Database_Connectivity [ODBC]: https://en.wikipedia.org/wiki/Open_Database_Connectivity [PostgreSQL wire protocol]: https://www.postgresql.org/docs/current/protocol.html [sharing a playlist of videos]: https://www.youtube.com/playlist?list=PL3cZtICBssphXl5rHgsgG9vTNAVTw_Veq [Support]: https://cratedb.com/support/
(feature)= (features)= (all-features)= # All Features :::{include} /_include/styles.html ::: All features of CrateDB at a glance. :::::{grid} 1 3 3 3 :margin: 4 4 0 0 :padding: 0 :gutter: 2 ::::{grid-item-card} {material-outlined}`lightbulb;2em` Functional :::{toctree} :maxdepth: 1 sql/index connectivity/index document/index relational/index Search: FTS, Geo, Vector, Hybrid blob/index ::: +++ CrateDB combines the power of Lucene with the advantages of industry-standard SQL. :::: ::::{grid-item-card} {material-outlined}`group;2em` Operational :::{toctree} :maxdepth: 1 cluster/index snapshot/index cloud/index storage/index index/index ::: +++ CrateDB scales horizontally using a shared-nothing architecture, inherited from Elasticsearch. :::: ::::{grid-item-card} {material-outlined}`read_more;2em` Advanced :::{toctree} :maxdepth: 1 query/index generated/index cursor/index fdw/index udf/index ccr/index ::: +++ Advanced features supporting daily data operations, all based on standard SQL. :::: ::::: :::{rubric} Connect and Integrate ::: Connect to CrateDB using traditional database drivers, and integrate CrateDB with popular 3rd-party applications in open-source and proprietary software landscapes. ::::{grid} 1 2 2 2 :margin: 4 4 0 0 :padding: 0 :gutter: 2 :::{grid-item-card} {material-outlined}`link;2em` Connectivity :link: connectivity :link-type: ref :link-alt: About connection options with CrateDB Connect to your CrateDB cluster using drivers, frameworks, and adapters. +++ **What's inside:** Connectivity and integration options with database drivers and applications, libraries, and frameworks. ::: :::{grid-item-card} {material-outlined}`sync;2em` Import and Export :link: import-export :link-type: ref :link-alt: About time series data import and export Import data into and export data from your CrateDB cluster. +++ **What's inside:** A variety of options to connect and integrate with 3rd-party ETL applications. ::: :::: :::{rubric} Highlights ::: Important fundamental features of CrateDB, and how they are applied within software solutions and application platforms in different scenarios and environments. ::::{grid} 1 2 2 2 :margin: 4 4 0 0 :padding: 0 :gutter: 2 :::{grid-item-card} {material-outlined}`description;2em` Document Store :link: document :link-type: ref :link-alt: About CrateDB's OBJECT data type Learn about CrateDB's OBJECT data type, how to efficiently store JSON or other structured data, also nested, and how to query this data with ease, fully indexed thus performant from the start, optionally using relational joins. +++ **What's inside:** CrateDB can do the same like Lotus Notes / Domino, CouchDB, MongoDB, and PostgreSQL's JSON data type. ::: :::{grid-item-card} {material-outlined}`manage_search;2em` Full-Text Search :link: fulltext :link-type: ref :link-alt: About CrateDB's full-text search capabilities Learn about CrateDB's Okapi BM25 implementation, how to set up your database for full-text search, and how to query text data efficiently, to make sense of large volumes of unstructured information. +++ **What's inside:** Like Elasticsearch and Solr, CrateDB is based on Lucene, the premier industry-grade full-text search engine library. ::: :::: :::{rubric} Quotes ::: > When using CrateDB, a project that got started around the same time, it's like you've stumbled into an alternative reality where Elastic is a proper database. 
> > -– Henrik Ingo, Nyrkiö Oy, independent database consultant, MongoDB > CrateDB enables use cases we couldn't satisfy with other database systems, also with databases which are even stronger focused on the time series domain. > > CrateDB is not your normal database! > > -- Daniel Hölbling-Inzko, Director of Engineering Analytics, Bitmovin.. _index: ================= CrateDB Reference ================= CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of data in real-time. .. NOTE:: This resource assumes you know the basics. If not, check out the `Tutorials`_ section for beginner material. .. SEEALSO:: CrateDB is an open source project and is `hosted on GitHub`_. .. rubric:: Table of contents .. toctree:: :maxdepth: 2 concepts/index cli-tools config/index general/index admin/index sql/index interfaces/index appendices/index .. _Tutorials: https://crate.io/docs/crate/tutorials/en/latest/ .. _hosted on GitHub: https://github.com/crate/crate(index)= (cloud-docs-index)= # CrateDB Cloud CrateDB Cloud is a fully managed, terabyte-scale, and cost-effective analytics database that lets you run analytics over vast amounts of data in near real time, even with complex queries. It is an SQL database service for enterprise data warehouse workloads, that works across clouds and scales with your data. :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slimmer :columns: 6 :::{rubric} Database Features ::: CrateDB Cloud helps you manage and analyze your data with procedures like machine learning, geospatial analysis, and business intelligence. CrateDB Cloud's scalable, distributed analysis engine lets you query terabytes worth of data efficiently. CrateDB provides a rich data model including container-, geospatial-, and vector-data types, and capabilities for full-text search. :::: ::::{grid-item} :class: rubric-slimmer :columns: 6 :::{rubric} Operational Benefits ::: CrateDB Cloud is a fully managed enterprise service, allowing you to deploy, monitor, back up, and scale your CrateDB clusters in the cloud without the need to do database management yourself. With CrateDB Cloud, there's no infrastructure to set up or manage, letting you focus on finding meaningful insights using plain SQL, and taking advantage of flexible pricing models across on-demand and flat-rate options. :::: ::::: :::{rubric} Learn ::: Users around the world rely on CrateDB Cloud clusters to store billions of records and terabytes of data, all accessible without delays. If you want to start using CrateDB Cloud, or make the most of your existing subscription, we are maintaining resources and tutorials to support you correspondingly. ::::{grid} 1 2 2 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {octicon}`rocket` Quick Start :link: quick-start :link-type: ref Learn how to sign up and get started with a free cluster. ::: :::{grid-item-card} {octicon}`file-code` Import :link: cluster-import :link-type: ref Import your own data into your CrateDB Cloud cluster. ::: :::{grid-item-card} {octicon}`table` Console :link: cluster-console :link-type: ref Explore your data and execute SQL queries in the Console. ::: :::{grid-item-card} {octicon}`tools` Manage :link: cloud-howtos-index :link-type: ref Learn how to manage your cluster. ::: :::{grid-item-card} {octicon}`terminal` Croud CLI :link: cluster-deployment-croud :link-type: ref A command-line tool to operate your managed clusters. 
::: :::: Do you want to learn about key database drivers and client applications for CrateDB, such as CrateDB Admin UI, crash, psql, DataGrip, and DBeaver? Discover how to configure these tools and explore CrateDB's compatibility with analytics, ETL, BI, and monitoring solutions. ::::{grid} 1 2 2 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {material-outlined}`table_chart` Admin UI :link: crate-admin-ui:index :link-type: ref Each CrateDB Cloud cluster offers a dedicated Admin UI, which can be used to explore data, schema metadata, and cluster status information. ::: :::{grid-item-card} {material-outlined}`link` Clients, Tools, and Integrations :link: crate-clients-tools:index :link-type: ref Learn about compatible client applications and tools, and how to configure your favorite client library to connect to a CrateDB cluster. ::: :::: :::{note} Like [CrateDB itself], this is an open source documentation project. [Suggestions for improvements], and [source code contributions], are always welcome. {fab}`github` ::: :::{toctree} :maxdepth: 1 :hidden: Quick Start Services Import Console Automation Integrations Export Backups Manage Cluster Billing Access Management Networking & Connectivity API How Tos Croud CLI Reference ::: [CrateDB]: https://crate.io/product/ [Croud CLI]: https://crate.io/docs/cloud/cli/ [How-To Guides]: https://crate.io/docs/cloud/en/latest/howtos/ [Reference]: https://crate.io/docs/cloud/en/latest/reference/ [CrateDB itself]: https://github.com/crate/crate [source code contributions]: https://github.com/crate/cloud-docs/tree/main/docs [suggestions for improvements]: https://github.com/crate/cloud-docs/issues.. _sharding_guide: .. _sharding-performance: ========================== Sharding Performance Guide ========================== This document is a sharding best practice guide for CrateDB. A brief recap: CrateDB tables are split into a configured number of shards, and then these shards are distributed across the cluster. Figuring out how many shards to use for your tables requires you to think about the type of data you're processing, the types of queries you're running, and the type of hardware you're using. .. NOTE:: This guide assumes you know the basics. If you are looking for an intro to sharding, see :ref:`sharding `. .. rubric:: Table of contents .. contents:: :local: Optimising for query performance ================================ .. _sharding-under-allocation: Under-allocation is bad ----------------------- .. CAUTION:: If you have fewer shards than CPUs in the cluster, this is called *under-allocation*, and it means you're not getting the best performance out of CrateDB. Whenever possible, CrateDB will parallelize query workloads and distribute them across the whole cluster. The more CPUs this query workload can be distributed across, the faster the query will run. To increase the chances that a query can be parallelized and distributed maximally, there should be at least as many shards for a table than there are CPUs in the cluster. This is because CrateDB will automatically balance shards across the cluster so that each node contains as few shards as possible. In summary: the smaller your shards are, the more of them you will have, and so the more likely it is that they will be distributed across the whole cluster, and hence across all of your CPUs, and hence the faster your queries will run. Significant over-allocation is bad ---------------------------------- .. CAUTION:: If you have more shards per table than CPUs, this is called *over-allocation*. 
A little over-allocation is desirable. But if you significantly over-allocate your shards per table, you will see performance degradation. When you have slightly more shards per table than CPUs, you ensure that query workloads can be parallelized and distributed maximally, which in turn ensures maximal query performance. However, if most nodes have more shards per table than they have CPUs, you could actually see performance degradation. Each shard comes with a cost in terms of open files, RAM, and CPU cycles. Smaller shards also means small shard indexes, which can adversely affect computed search term relevance. For performance reasons, one thousand shards per table per node is considered the highest recommended configuration. If you exceed this you will experience a failing cluster check. Balancing allocation -------------------- Finding the right balance when it comes to sharding will vary on a lot of things. And while it's generally advisable to slightly over-allocate, it's also a good idea to benchmark your particular setup so as to find the sweet spot. If you don't manually set the number of shards per table, CrateDB will make a best guess, based on the assumption that your nodes have two CPUs each. .. TIP:: For the purposes of calculating how many shards a table should be clustered into, you can typically ignore replica partitions as these are not usually queried across for reads. .. CAUTION:: If you are using :ref:`partitioned tables `, note that each partition is clustered into as many shards as you configure for the table. For example, a table with four shards and two partitions will have eight shards that can be commonly queried across. But a query that only touches one partition will only query across four shards. How this factors into balancing your shard allocation will depend on the types of queries you intend to run. .. _sharding_ingestion: Optimising for ingestion performance ==================================== As with `Optimising for query performance`_, when doing heavy ingestion, it is good to cluster a table across as many nodes as possible. However, `we have found`_ that ingestion throughput can often increase as the table shard per CPU ratio on each node *decreases*. Ingestion throughput typically varies on: data volume, individual payload sizes, batch insert size, and the hardware. In particular: using solid-state drives (SSDs) instead of hard-disk drives (HDDs) can massively increase ingestion throughput. It's a good idea to benchmark your particular setup so as to find the sweet spot. .. _we have found: https://cratedb.com/blog/big-cluster-insights-ingesting.. _performance-optimization: ######################## Query Optimization 101 ######################## This article covers some essential principles for optimizing queries in CrateDB while avoiding the most common pitfalls. The patterns are relevant to both the troubleshooting of slow queries and the proactive tuning of CrateDB deployments, and they show how small adjustments to filters, data transformations, and schemas can yield dramatic improvements in execution speed and resource utilization. .. _group-early-filtering: ************************************ Early Filtering and Data Reduction ************************************ This section focuses on minimizing data processed early in queries to reduce overhead. .. 
_filtering-early: Do all filtering as soon as possible ==================================== Sometimes it may be tempting to define some VIEWs, some CTEs, do some JOINs, and only filter results at the end, but in this context the optimizer may lose track of how the fields we are filtering on relate to the indexes on the actual tables. Whenever there is an opportunity to filter data immediately next to the ``FROM`` clause, try to narrow down results as early as possible. See `using common table expressions to speed up queries`_ for an example. .. _select-star: Avoid ``SELECT *`` ================== CrateDB is a columnar database. The fewer columns you specify in a ``SELECT`` clause, the less data CrateDB needs to read from disk. .. code:: sql -- Avoid selecting all columns SELECT * FROM customers; -- Instead, explicitly select the subset of columns you need SELECT customerid, country FROM customers; .. _minimise-result-sets: Avoid large result sets ======================= Be aware of the number of rows you are returning in a ``SELECT`` query. Analytical databases, such as CrateDB, excel at processing large data sets and returning small to medium-sized result sets. Serializing large result sets, transporting them over the network, and deserializing them is expensive. When dealing with large result sets in the range of several hundred thousand records, consider whether your application needs the whole result set at once. Use `cursors`_ or ``LIMIT``/``OFFSET`` to fetch data in batches. See also `Fetching large result sets from CrateDB`_ for examples. .. _propagate-limit: Propagate LIMIT clauses when applicable ======================================= Similarly to the above, a query may end with a ``LIMIT 10``, and it may have been sufficient to pull only 10 records (or some other number of records) from a given table at an earlier stage. If that is the case, duplicate or move (depending on the specific query) the ``LIMIT`` clause to the relevant place. In other cases, we may not know exactly how many rows we need in the intermediate working sets, but we do know that the records we are after will be found within a recent time window. Filtering early helps the optimizer and can protect the database from accidentally processing years of data; without early filtering, the load on your cluster can increase tremendously. So, for instance, instead of: .. code:: sql SELECT factory_metadata.factory_name, device_data.device_name, device_data.reading_value FROM device_data INNER JOIN factory_metadata ON device_data.factory_id = factory_metadata.factory_id WHERE reading_time BETWEEN '2024-01-01' AND '2025-01-01' LIMIT 10; do: .. code:: sql WITH filtered_device_data AS ( SELECT device_data.factory_id, device_data.device_name, device_data.reading_value FROM device_data WHERE /* We are sure one month of data is sufficient to find 10 results and it may help with partition pruning */ reading_time BETWEEN '2024-12-01' AND '2025-01-01' LIMIT 10 ) SELECT factory_metadata.factory_name, filtered_device_data.device_name, filtered_device_data.reading_value FROM filtered_device_data INNER JOIN factory_metadata ON filtered_device_data.factory_id = factory_metadata.factory_id; .. _filter-with-array-expressions: Use filters with array expressions when filtering on the output of UNNEST ========================================================================== On denormalized data sets, you may observe records including columns storing arrays of objects.
You may want to unnest the array in a subquery or CTE and later filter on a property of the OBJECTs. The next statement (in versions of CrateDB < 6.0.0) will result in every row in the table (unless filtered out by other conditions) being read and unnested to check whether it meets the criteria on ``field1``. .. code:: sql SELECT * FROM ( SELECT UNNEST(my_array_of_objects) obj FROM my_table ) WHERE obj['field1'] = 1; However, CrateDB can do a lot better than this if we add an additional condition like this: .. code:: sql SELECT * FROM ( SELECT UNNEST(my_array_of_objects) obj FROM my_table WHERE 1 = ANY(my_array_of_objects['field1']) ) AS subquery WHERE obj['field1'] = 1; CrateDB leverages indexes to only unnest the relevant records from ``my_table``, which can make a huge difference. .. _group-efficient-query-structure: ****************************************** Efficient Query Structure and Constructs ****************************************** This section focuses on optimizing SQL logic by prioritizing efficient syntax and avoiding redundant operations. .. _only-sort-when-needed: Only sort data when needed ========================== Indexing in CrateDB is optimized to support filtering and aggregations without requiring expensive defragmentation operations, but it is not optimized for sorting. Maintaining a sorted index would slow down ingestion, which is why other analytical database systems like Cassandra and Redshift make similar trade-offs. This means that when an ``ORDER BY`` operation is requested, the whole dataset needs to be loaded into main memory on the relevant cluster node to be sorted. For this reason, it is important not to request ``ORDER BY`` operations when they are not actually needed, and most importantly, not on tables of large cardinalities without aggregating records beforehand. On the other hand, it is of course no problem to sort a few thousand rows in the final stage of a ``SELECT`` operation, but we need to avoid requesting sort operations over millions of rows. Consider leveraging filters and aggregations like ``max_by`` and ``min_by`` to limit the scope of ``ORDER BY`` operations, or avoid them altogether when possible. So, for instance, instead of: .. code:: sql SELECT reading_time, reading_value FROM device_data WHERE reading_time BETWEEN '2024-01-01' AND '2025-01-01' ORDER BY reading_time DESC LIMIT 10; use: .. code:: sql SELECT reading_time, reading_value FROM device_data WHERE reading_time BETWEEN '2024-12-20' AND '2025-01-01' ORDER BY reading_time DESC LIMIT 10; .. _format-as-last-step: Format output as a last step ============================ In many cases, data may be stored in an efficient format, but we want to transform it to make it more human-readable in the output of the query. To accommodate such situations, we may use `scalar functions`_ such as ``date_format`` or ``timezone``. Sometimes queries apply these transformations in an intermediate step and later do further operations, like filtering, on the transformed values. CrateDB's query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans. Depending on the query, it always aims to use existing indexes on the original data for maximum efficiency. However, there is always a chance that some particular clause in the query expression prevents the optimizer from selecting an optimal plan, so that it ends up applying the transformation to thousands or millions of records that are later discarded anyway.
So, whenever it makes sense, we want to ensure these transformations are only applied after the database has already worked out the final result set to be sent back to the client. So instead of: .. code:: sql WITH mydata AS ( SELECT DATE_FORMAT(device_data.reading_time) AS formatted_reading_time, device_data.reading_value FROM device_data ) SELECT * FROM mydata WHERE formatted_reading_time LIKE '2025%'; use: .. code:: sql SELECT DATE_FORMAT(device_data.reading_time) AS formatted_reading_time, device_data.reading_value FROM device_data WHERE device_data.reading_time BETWEEN '2025-01-01' AND '2026-01-01'; .. _replace-case: Replace CASE in expressions used for filtering, JOINs, grouping, etc. ===================================================================== It is not always obvious to the optimizer what we may be trying to do with a ``CASE`` expression (see for instance `Shortcut CASE evaluation Issue 16022`_). If you are using a ``CASE`` expression for “formatting”, see the previous point about formatting output as late as possible; but if you are using a ``CASE`` expression as part of a filter or other operation, consider replacing it with an equivalent expression. For instance: .. code:: sql SELECT SUM(a) as count_greater_than_10,... FROM ( SELECT CASE WHEN field1 > 10 THEN 1 ELSE 0 END AS a, ... FROM mytable ... ) subquery ...; can be rewritten as .. code:: sql SELECT COUNT(field1) FILTER (WHERE field1 > 10) as count_greater_than_10 FROM mytable; And .. code:: postgresql SELECT * FROM mytable WHERE CASE WHEN $1 = 'ALL COUNTRIES' THEN true WHEN $1 = mytable.country AND $2 = 'ALL CITIES' THEN true ELSE $1 = mytable.country AND $2 = mytable.city END; can be rewritten as .. code:: postgresql SELECT * FROM mytable WHERE ($1 = 'ALL COUNTRIES') OR ($1 = mytable.country AND $2 = 'ALL CITIES') OR ($1 = mytable.country AND $2 = mytable.city) (the exact replacement expressions of course depend on the semantics of each case). .. _groups-instead-distinct: Use groupings instead of DISTINCT ================================= (Reference: `Issue 13818`_) Instead of .. code:: sql SELECT DISTINCT country FROM customers; use .. code:: sql SELECT country FROM customers GROUP BY country; and instead of .. code:: sql SELECT COUNT(DISTINCT a) FROM t; use .. code:: sql SELECT COUNT(a) FROM ( SELECT a FROM t GROUP BY a ) tmp; .. _subqueries-instead-groups: Use subqueries instead of GROUP BY if the groups are already known =================================================================== Consider the following query: .. code:: sql SELECT customerid, SUM(order_amount) AS total FROM customer_orders GROUP BY customerid; This looks simple, but to execute it, CrateDB needs to keep the full result set for all groups in memory. If we already know what the groups will be, we can use correlated subqueries instead: .. code:: sql SELECT customerid, (SELECT SUM(order_amount) FROM customer_orders WHERE customer_orders.customerid = customers.customerid ) AS total FROM customers; .. _group-large-and-complex-queries: ************************************ Handling Large and Complex Queries ************************************ This section discusses strategies for breaking down complex operations on large datasets into manageable steps. .. _batch-operations: Batch operations ================ If you need to perform lots of UPDATEs or expensive INSERTs from SELECT, consider exploring different settings for the `overload protection`_ or `thread pool sizing`_, which can be used to fine-tune the performance of these operations.
Otherwise, if you only need to run it once and performance is not critical, consider using small batches instead, where the operations are done on groups of records each time. So, for instance, instead of doing: .. code:: sql UPDATE mytable SET field1 = field1 + 1; consider a different approach such as: .. code:: shell for id in {1..100}; do crash -c "UPDATE mytable SET field1 = field1 + 1 WHERE customer_id = $id;" done .. _pagination-filters: Paginate on filters instead of results ====================================== For instance, instead of .. code:: sql SELECT deviceid, AVG(field1) FROM device_data GROUP BY deviceid LIMIT 1000 OFFSET 5000; We can do something like .. code:: sql WITH devices AS ( SELECT deviceid FROM devices LIMIT 5 OFFSET 25 ) SELECT deviceid, AVG(field1) FROM device_data WHERE device_data.deviceid IN (SELECT devices.deviceid FROM devices) GROUP BY deviceid; .. _staging-tables: Use staging tables for intermediate results if you are doing a lot of JOINs ============================================================================ If you have many CTEs or VIEWs that you need to JOIN, it can be beneficial to query them individually, store intermediate results in dedicated tables, and then use these tables for the JOINs. While there is a cost in writing to disk and reading data back, the whole operation can benefit from indexing and from giving the optimizer more straightforward execution plans, enabling it to optimize for better parallel execution across multiple cluster nodes. .. _group-schema-and-function-optimization: ********************************** Schema and Function Optimization ********************************** This section focuses on schema design and function usage to streamline performance. .. _consider-generated-columns: Consider generated columns ========================== If you frequently find yourself extracting information from fields and then using this extracted data in filters or aggregations, it can be a good idea to do this operation at ingestion time with a `generated column`_, as shown in the sketch below. In this way, the value we need for filtering and aggregations can be indexed. This involves a trade-off between storage space and query performance: weigh the frequency and execution times of these queries against the additional storage required for the generated value. See `Using regex comparisons and other features for inspection of logs`_ for an example.
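As a minimal sketch of the idea (table and column names are illustrative, not from the original article): the raw timestamp is stored as ingested, and a value derived from it for filtering and grouping is computed once, at write time, so it gets indexed like any other column.

.. code:: sql

   CREATE TABLE access_logs (
       raw_line TEXT,
       ts TIMESTAMP WITH TIME ZONE,
       -- Computed once at ingestion time and indexed like a regular column,
       -- so filters and aggregations on it do not re-derive the value per query.
       day TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS date_trunc('day', ts)
   );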
.. _udf-right-context: Be mindful of UDFs, leverage them in the right contexts, but only in the right contexts ======================================================================================== When using user-defined functions (UDFs), two important performance-related details need to be considered. #. Once values are processed by a UDF, the database engine loads the results into memory and can no longer leverage indexes on the underlying fields. In this spirit, please apply the general considerations above about delaying formatting as much as possible. #. UDFs run on a JavaScript virtual machine on a single thread, so they can have an impact on performance, as the relevant operations cannot be parallelized. However, some operations may be more straightforward to do in JavaScript than in SQL. .. _group-filter-expression-optimizations: ************************************ Filter and Expression Optimization ************************************ This section discusses expressions that improve filter efficiency and the handling of specific data structures. .. _positive-filters: Avoid expression negation in filters ==================================== Positive filter expressions can directly leverage indexing. With negative expressions, the optimizer may still be able to use indexes, but this may not always happen, and the optimizer might not rewrite the query optimally. Explicitly using positive conditions removes ambiguity and ensures the most efficient path is chosen. So instead of: .. code:: sql SELECT customerid, status FROM customers_table WHERE NOT (customerid <= 2) AND NOT (status = 'inactive'); We can rewrite this (assuming 'active' and 'inactive' are the only possible statuses) as: .. code:: sql SELECT customerid, status FROM customers_table WHERE customerid > 2 AND status = 'active'; .. _use-null-or-empty: Use the special null_or_empty function with OBJECTs and ARRAYs when relevant ============================================================================= CrateDB has a special scalar function called null_or_empty_. Using it in filter conditions against OBJECTs and ARRAYs is much faster than an ``IS NULL`` clause, provided that treating empty objects and arrays like NULL is acceptable. So instead of: .. code:: sql SELECT ... FROM mytable WHERE array_column IS NULL OR array_column = []; We can rewrite this as: .. code:: sql SELECT ... FROM mytable WHERE null_or_empty(array_column); .. _group-performance-analysis: ****************************************** Performance Analysis and Execution Plans ****************************************** .. _execution-plans: Review execution plans ====================== If a query is slow but still completes within a reasonable amount of time, we can use `EXPLAIN ANALYZE`_ to get a detailed execution plan, as shown below. The main things to watch for in these plans are ``MatchAllDocsQuery`` and ``GenericFunctionQuery``. These operations are full table scans, so you may want to review whether that is expected in your query (you may, for instance, intentionally be pulling all records from a table holding a list of factory sites) or whether a filter is not being pushed down properly.
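A minimal sketch of requesting such a plan (the table and filter are illustrative, borrowed from the earlier examples); in the output, look for the ``MatchAllDocsQuery`` and ``GenericFunctionQuery`` entries mentioned above:

.. code:: sql

   -- Prefix a slow query with EXPLAIN ANALYZE to see how CrateDB executed it
   EXPLAIN ANALYZE
   SELECT device_name, reading_value
   FROM device_data
   WHERE reading_time BETWEEN '2024-12-01' AND '2025-01-01';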
.. _explain analyze: https://cratedb.com/docs/crate/reference/en/latest/sql/statements/explain.html .. _fetching large result sets from cratedb: https://community.cratedb.com/t/fetching-large-result-sets-from-cratedb/1270 .. _overload protection: https://cratedb.com/docs/crate/reference/en/latest/config/cluster.html#overload-protection .. _thread pool sizing: https://cratedb.com/docs/crate/reference/en/latest/config/cluster.html#thread-pools .. _generated column: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/generated-columns.html .. _issue 13818: https://github.com/crate/crate/issues/13818 .. _null_or_empty: https://cratedb.com/docs/crate/reference/en/latest/general/builtins/scalar-functions.html#null-or-empty-object .. _scalar functions: https://cratedb.com/docs/crate/reference/en/latest/general/builtins/scalar-functions.html .. _shortcut case evaluation issue 16022: https://github.com/crate/crate/issues/16022 .. _using common table expressions to speed up queries: https://community.cratedb.com/t/using-common-table-expressions-to-speed-up-queries/1719 .. _using regex comparisons and other features for inspection of logs: https://community.cratedb.com/t/using-regex-comparisons-and-other-advanced-database-features-for-real-time-inspection-of-web-server-logs/1564 .. _cursors: https://cratedb.com/docs/crate/reference/en/latest/sql/statements/declare.html .. _performance-scaling: ################## Design for scale ################## This article explores critical design considerations for successfully scaling CrateDB in large production environments, to ensure performance and reliability as workloads grow. .. _mindful-of-memory: ******************************* Be mindful of memory capacity ******************************* In CrateDB, operations that require a working set, like groupings, aggregations, and sorting, are performed fully in memory, without spilling over to disk. Sometimes you may have a query that leads to a sub-optimal execution plan requiring lots of memory. If you are coming to CrateDB from other database systems, you may expect such queries to simply keep running, taking longer than necessary and impacting other workloads in the meantime. Sometimes this effect is obvious, when a query takes a lot of resources and runs for a long time; other times it may go unnoticed, when a query that could complete in, say, 100 milliseconds takes one hundred times longer (10 seconds), but users put up with it without reporting it to you. If a query would require more heap memory than the nodes involved have available, the query fails with a particular type of error that we call a ``CircuitBreakerException``. This is a fail-fast approach: we quickly see that there is an issue and can optimize the query to get the best performance, without impacting other workloads. Please take a look at :ref:`Query Optimization 101 ` for strategies to optimize your queries when you encounter this situation. .. _reading-lots-of-records: ************************* Reading lots of records ************************* When the HTTP endpoint is used, CrateDB prepares the entire response in memory before sending it to the client. When the PostgreSQL protocol is used, CrateDB attempts to stream the results, but in many cases it still needs to bring all rows to the query handler node first. So we should always limit how many rows we request at a time; see `Fetching large result sets from CrateDB`_. .. _number-of-shards: ****************** Number of shards ****************** In CrateDB, data in tables and partitions is distributed into storage units that we call shards. If we do not specify how many shards we want for a table/partition, CrateDB will derive a default from the number of nodes. CrateDB also keeps replicas of the data, and these result in additional shards in the cluster. Having too many or too few shards has performance implications, so it is very important to get familiar with the :ref:`Sharding Performance Guide `. In particular, there is a soft limit of 1,000 shards per node, so table schemas, the partitioning strategy, and the number of nodes need to be planned to stay well below this limit; one strategy is to aim for a configuration where, even if one node in the cluster is lost, the remaining nodes still hold fewer than 1,000 shards. If this was not taken into account when the tables were initially defined, keep the following in mind: - changing the partitioning strategy requires creating a new table and copying over the data - the easiest way to change the number of shards on a partitioned table is to change it for new partitions only, using the ``ALTER TABLE ONLY`` command, as shown in the sketch below - see also `Changing the number of shards`_
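For the partitioned-table case from the last list item, a minimal sketch (the table name and shard count are illustrative; existing partitions keep their current shard count):

.. code:: sql

   -- Only partitions created after this statement will use the new shard count
   ALTER TABLE ONLY my_metrics SET (number_of_shards = 6);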
.. _amount-of-indexed-columns: ************************************* Number of indexed fields in OBJECTs ************************************* ``OBJECT`` columns are ``DYNAMIC`` by default, and CrateDB indexes all their fields, providing excellent query performance without requiring manual indexing. However, excessive indexing can impact storage, write speed, and resource utilization. - All fields in OBJECTs are automatically indexed when inserted. - CrateDB optimizes indexing using Lucene-based columnar storage. - A soft limit of 1,000 total indexed columns and OBJECT fields per table exists. - Going beyond this limit may impact performance. In cases with many fields and columns, it is advisable to determine whether some OBJECTs, or nested parts of them, really need to be indexed, and to use the `ignored column policy`_ where applicable. .. _section-joins: ******* JOINs ******* CrateDB is a lot better at JOINs than many of our competitors and is getting better with every release, but JOINs in distributed databases are tricky to optimize, so in many cases queries involving JOINs may need a bit of tweaking. See `Using common table expressions to speed up queries`_. .. _changing the number of shards: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/alter-table.html#alter-shard-number .. _fetching large result sets from cratedb: https://community.cratedb.com/t/fetching-large-result-sets-from-cratedb/1270 .. _ignored column policy: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/data-types.html#ignored .. _using common table expressions to speed up queries: https://community.cratedb.com/t/using-common-table-expressions-to-speed-up-queries/1719 .. _autogenerated_sequences_performance: ########################################################### Autogenerated sequences and PRIMARY KEY values in CrateDB ########################################################### As you begin working with CrateDB, you might be puzzled why CrateDB does not have a built-in, auto-incrementing "serial" data type like PostgreSQL or MySQL do. As a distributed database designed to scale horizontally, CrateDB needs as many operations as possible to complete independently on each node, without any coordination between nodes. Maintaining a global auto-increment value requires that a node checks with other nodes before allocating a new value. Such a bottleneck would hinder our ability to achieve `extremely fast ingestion speeds`_. That said, there are many alternatives available, and we can also implement truly consistent/synchronized sequences if we want to. ************************************ Using a timestamp as a primary key ************************************ This option involves declaring a column as follows: .. code:: psql BIGINT DEFAULT now() PRIMARY KEY :Pros: Always-increasing number; ideal if we need to timestamp record creation anyway. :Cons: Gaps between the numbers; not suitable if we may have more than one record in the same millisecond. ************* Using UUIDs ************* This option involves declaring a column as follows: .. code:: psql TEXT DEFAULT gen_random_text_uuid() PRIMARY KEY :Pros: Globally unique; no risk of conflicts when merging data from different tables/environments. :Cons: No order guarantee. Not as human-friendly as numbers. The string format may not fit all scenarios. Range queries are not possible. ************************ Use UUIDv7 identifiers ************************ `Version 7 UUIDs`_ are a relatively new kind of UUID that features a time-ordered value.
We can use these in CrateDB with an UDF_ with the code from `UUIDv7 in N languages`_. :Pros: Same as `gen_random_text_uuid` above but almost sequential, which enables range queries. :Cons: not as human-friendly as numbers and slight performance impact from UDF use ********************************* Use IDs from an external system ********************************* In cases where data is imported into CrateDB from external systems that employ identifier governance, CrateDB does not need to generate any identifier values and primary key values can be inserted as-is from the source system. See `Replicating data from other databases to CrateDB with Debezium and Kafka`_ for an example. ********************* Implement sequences ********************* This approach involves a table to keep the latest values that have been consumed and client side code to keep it up-to-date in a way that guarantees unique values even when many ingestion processes run in parallel. :Pros: Can have any arbitrary type of sequences, (we may for instance want to increment values by 10 instead of 1 - prefix values with a year number - combine numbers and letters - etc) :Cons: Need logic for the optimistic update implemented client-side, the sequences table becomes a bottleneck so not suitable for high-velocity ingestion scenarios We will first create a table to keep the latest values for our sequences: .. code:: psql CREATE TABLE sequences ( name TEXT PRIMARY KEY, last_value BIGINT ) CLUSTERED INTO 1 SHARDS; We will then initialize it with one new sequence at 0: .. code:: psql INSERT INTO sequences (name,last_value) VALUES ('mysequence',0); And we are going to do an example with a new table defined as follows: .. code:: psql CREATE TABLE mytable ( id BIGINT PRIMARY KEY, field1 TEXT ); The Python code below reads the last value used from the sequences table, and then attempts an `optimistic UPDATE`_ with a ``RETURNING`` clause, if a contending process already consumed the identity nothing will be returned so our process will retry until a value is returned, then it uses that value as the new ID for the record we are inserting into the ``mytable`` table. .. code:: python # /// script # requires-python = ">=3.8" # dependencies = [ # "records", # "sqlalchemy-cratedb", # ] # /// import time import records db = records.Database("crate://") sequence_name = "mysequence" max_retries = 5 base_delay = 0.1 # 100 milliseconds for attempt in range(max_retries): select_query = """ SELECT last_value, _seq_no, _primary_term FROM sequences WHERE name = :sequence_name; """ row = db.query(select_query, sequence_name=sequence_name).first() new_value = row.last_value + 1 update_query = """ UPDATE sequences SET last_value = :new_value WHERE name = :sequence_name AND _seq_no = :seq_no AND _primary_term = :primary_term RETURNING last_value; """ if ( str( db.query( update_query, new_value=new_value, sequence_name=sequence_name, seq_no=row._seq_no, primary_term=row._primary_term, ).all() ) != "[]" ): break delay = base_delay * (2**attempt) print(f"Attempt {attempt + 1} failed. Retrying in {delay:.1f} seconds...") time.sleep(delay) else: raise Exception(f"Failed after {max_retries} retries with exponential backoff") insert_query = "INSERT INTO mytable (id, field1) VALUES (:id, :field1)" db.query(insert_query, id=new_value, field1="abc") db.close() .. _extremely fast ingestion speeds: https://cratedb.com/blog/how-we-scaled-ingestion-to-one-million-rows-per-second .. 
_optimistic update: https://cratedb.com/docs/crate/reference/en/latest/general/occ.html#optimistic-update .. _replicating data from other databases to cratedb with debezium and kafka: https://cratedb.com/blog/replicating-data-from-other-databases-to-cratedb-with-debezium-and-kafka .. _udf: https://cratedb.com/docs/crate/reference/en/latest/general/user-defined-functions.html .. _uuidv7 in n languages: https://github.com/nalgeon/uuidv7/blob/main/src/uuidv7.cratedb .. _version 7 uuids: https://datatracker.ietf.org/doc/html/rfc9562#name-uuid-version-7
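
For reference, here is a minimal sketch of the same optimistic round trip
expressed directly in SQL, for example from ``crash`` or the Admin UI console.
It assumes the ``sequences`` table and the ``mysequence`` row created above;
the ``_seq_no`` and ``_primary_term`` values in the ``UPDATE`` are placeholders
for whatever the preceding ``SELECT`` returned (``0`` and ``1`` are merely
example values for a freshly inserted row).

.. code:: psql

   -- Read the current value together with the row's versioning metadata.
   SELECT last_value, _seq_no, _primary_term
   FROM sequences
   WHERE name = 'mysequence';

   -- Try to claim the next value. The row is only updated if it has not been
   -- modified since the SELECT above; otherwise no row is returned and the
   -- whole read/update cycle must be retried.
   UPDATE sequences
   SET last_value = last_value + 1
   WHERE name = 'mysequence'
     AND _seq_no = 0          -- value returned by the SELECT above
     AND _primary_term = 1    -- value returned by the SELECT above
   RETURNING last_value;

The value returned by ``RETURNING last_value`` can then be used as the primary
key for the actual ``INSERT``, exactly as the Python example does.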
.. _sql: ========== SQL syntax ========== You can use :ref:`Structured Query Language ` (SQL) to query your data. This section of the documentation provides a complete SQL syntax reference for CrateDB. .. NOTE:: For introductions to CrateDB functionality, we recommend you consult the appropriate top-level section of the documentation. The SQL syntax reference assumes a basic familiarity with the relevant parts of CrateDB. .. SEEALSO:: :ref:`General use: Data definition ` :ref:`General use: Data manipulation ` :ref:`General use: Querying ` :ref:`General use: Built-in functions and operators ` .. toctree:: :maxdepth: 2 general/index statements/index.. highlight:: psql .. _scalar-functions: .. _builtins-scalar: ================ Scalar functions ================ Scalar functions are :ref:`functions ` that return :ref:`scalars `. .. rubric:: Table of contents .. contents:: :local: .. _scalar-string: String functions ================ .. _scalar-concat: ``concat('first_arg', second_arg, [ parameter , ... ])`` -------------------------------------------------------- Concatenates a variable number of arguments into a single string. It ignores ``NULL`` values. Returns: ``text`` :: cr> select concat('foo', null, 'bar') AS col; +--------+ | col | +--------+ | foobar | +--------+ SELECT 1 row in set (... sec) You can also use the ``||`` :ref:`operator `:: cr> select 'foo' || 'bar' AS col; +--------+ | col | +--------+ | foobar | +--------+ SELECT 1 row in set (... sec) .. NOTE:: The ``||`` operator differs from the ``concat`` function regarding the handling of ``NULL`` arguments. It will return ``NULL`` if any of the operands is ``NULL`` while the ``concat`` scalar will return an empty string if both arguments are ``NULL`` and the non-null argument otherwise. .. TIP:: The ``concat`` function can also be used for merging objects: :ref:`concat(object, object) `. .. _scalar-concat-ws: ``concat_ws('separator', second_arg, [ parameter , ... ])`` ------------------------------------------------------------------------------ Concatenates a variable number of arguments into a single string using a separator defined by the first argument. If first argument is ``NULL`` the return value is ``NULL``. Remaining ``NULL`` arguments are ignored. Returns: ``text`` :: cr> select concat_ws(',','foo', null, 'bar') AS col; +---------+ | col | +---------+ | foo,bar | +---------+ SELECT 1 row in set (... sec) .. _scalar-format: ``format('format_string', parameter, [ parameter , ... ])`` ----------------------------------------------------------- Formats a string similar to the C function ``printf``. For details about the format string syntax, see `formatter`_ Returns: ``text`` :: cr> select format('%s.%s', schema_name, table_name) AS fqtable ... from sys.shards ... where table_name = 'locations' ... limit 1; +---------------+ | fqtable | +---------------+ | doc.locations | +---------------+ SELECT 1 row in set (... sec) :: cr> select format('%tY', date) AS year ... from locations ... group by format('%tY', date) ... order by 1; +------+ | year | +------+ | 1979 | | 2013 | +------+ SELECT 2 rows in set (... sec) .. _scalar-substr: ``substr('string', from, [ count ])`` ------------------------------------- Extracts a part of a string. ``from`` specifies where to start and ``count`` the length of the part. Returns: ``text`` :: cr> select substr('crate.io', 3, 2) AS substr; +--------+ | substr | +--------+ | at | +--------+ SELECT 1 row in set (... 
sec) ``substr('string' FROM 'pattern')`` ----------------------------------- Extract a part from a string that matches a POSIX regular expression pattern. Returns: ``text``. If the pattern contains groups specified via parentheses it returns the first matching group. If the pattern doesn't match, the function returns ``NULL``. :: cr> SELECT ... substring('2023-08-07', '[a-z]') as no_match, ... substring('2023-08-07', '\d{4}-\d{2}-\d{2}') as full_date, ... substring('2023-08-07', '\d{4}-(\d{2})-\d{2}') as month; +----------+------------+-------+ | no_match | full_date | month | +----------+------------+-------+ | NULL | 2023-08-07 | 08 | +----------+------------+-------+ SELECT 1 row in set (... sec) .. _scalar-substring: ``substring(...)`` ------------------ Alias for :ref:`scalar-substr`. .. _scalar-char_length: ``char_length('string')`` ------------------------- Counts the number of characters in a string. Returns: ``integer`` :: cr> select char_length('crate.io') AS char_length; +-------------+ | char_length | +-------------+ | 8 | +-------------+ SELECT 1 row in set (... sec) Each character counts only once, regardless of its byte size. :: cr> select char_length('©rate.io') AS char_length; +-------------+ | char_length | +-------------+ | 8 | +-------------+ SELECT 1 row in set (... sec) .. _scalar-length: ``length(text)`` ---------------- Returns the number of characters in a string. The same as :ref:`char_length `. .. _scalar-bit_length: ``bit_length('string')`` ------------------------ Counts the number of bits in a string. Returns: ``integer`` .. NOTE:: CrateDB uses UTF-8 encoding internally, which uses between 1 and 4 bytes per character. :: cr> select bit_length('crate.io') AS bit_length; +------------+ | bit_length | +------------+ | 64 | +------------+ SELECT 1 row in set (... sec) :: cr> select bit_length('©rate.io') AS bit_length; +------------+ | bit_length | +------------+ | 72 | +------------+ SELECT 1 row in set (... sec) .. _scalar-octet_length: ``octet_length('string')`` -------------------------- Counts the number of bytes (octets) in a string. Returns: ``integer`` :: cr> select octet_length('crate.io') AS octet_length; +--------------+ | octet_length | +--------------+ | 8 | +--------------+ SELECT 1 row in set (... sec) :: cr> select octet_length('©rate.io') AS octet_length; +--------------+ | octet_length | +--------------+ | 9 | +--------------+ SELECT 1 row in set (... sec) .. _scalar-ascii: ``ascii(string)`` ----------------- Returns the ASCII code of the first character. For UTF-8, returns the Unicode code point of the characters. Returns: ``int`` :: cr> SELECT ascii('a') AS a, ascii('🎈') AS b; +----+--------+ | a | b | +----+--------+ | 97 | 127880 | +----+--------+ SELECT 1 row in set (... sec) .. _scalar-chr: ``chr(int)`` ------------ Returns the character with the given code. For UTF-8 the argument is treated as a Unicode code point. Returns: ``string`` :: cr> SELECT chr(65) AS a; +---+ | a | +---+ | A | +---+ SELECT 1 row in set (... sec) .. _scalar-lower: ``lower('string')`` ------------------- Converts all characters to lowercase. ``lower`` does not perform locale-sensitive or context-sensitive mappings. Returns: ``text`` :: cr> select lower('TransformMe') AS lower; +-------------+ | lower | +-------------+ | transformme | +-------------+ SELECT 1 row in set (... sec) .. _scalar-upper: ``upper('string')`` ------------------- Converts all characters to uppercase. ``upper`` does not perform locale-sensitive or context-sensitive mappings. 
Returns: ``text`` :: cr> select upper('TransformMe') as upper; +-------------+ | upper | +-------------+ | TRANSFORMME | +-------------+ SELECT 1 row in set (... sec) .. _scalar-initcap: ``initcap('string')`` --------------------- Converts the first letter of each word to upper case and the rest to lower case (*capitalize letters*). Returns: ``text`` :: cr> select initcap('heLlo WORLD') AS initcap; +-------------+ | initcap | +-------------+ | Hello World | +-------------+ SELECT 1 row in set (... sec) .. _scalar-sha1: ``sha1('string')`` ------------------ Returns: ``text`` Computes the SHA1 checksum of the given string. :: cr> select sha1('foo') AS sha1; +------------------------------------------+ | sha1 | +------------------------------------------+ | 0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33 | +------------------------------------------+ SELECT 1 row in set (... sec) .. _scalar-md5: ``md5('string')`` ----------------- Returns: ``text`` Computes the MD5 checksum of the given string. See :ref:`sha1 ` for an example. .. _scalar-replace: ``replace(text, from, to)`` --------------------------- Replaces all occurrences of ``from`` in ``text`` with ``to``. :: cr> select replace('Hello World', 'World', 'Stranger') AS hello; +----------------+ | hello | +----------------+ | Hello Stranger | +----------------+ SELECT 1 row in set (... sec) .. _scalar-translate: ``translate(string, from, to)`` ------------------------------- Performs several single-character, one-to-one translation in one operation. It translates ``string`` by replacing the characters in the ``from`` set, one-to-one positionally, with their counterparts in the ``to`` set. If ``from`` is longer than ``to``, the function removes the occurrences of the extra characters in ``from``. If there are repeated characters in ``from``, only the first mapping is considered. Synopsis:: translate(string, from, to) Examples:: cr> select translate('Crate', 'Ct', 'Dk') as translation; +-------------+ | translation | +-------------+ | Drake | +-------------+ SELECT 1 row in set (... sec) :: cr> select translate('Crate', 'rCe', 'c') as translation; +-------------+ | translation | +-------------+ | cat | +-------------+ SELECT 1 row in set (... sec) .. _scalar-trim: ``trim({LEADING | TRAILING | BOTH} 'str_arg_1' FROM 'str_arg_2')`` ------------------------------------------------------------------ Removes the longest string containing characters from ``str_arg_1`` (``' '`` by default) from the start, end, or both ends (``BOTH`` is the default) of ``str_arg_2``. If any of the two strings is ``NULL``, the result is ``NULL``. Synopsis:: trim([ [ {LEADING | TRAILING | BOTH} ] [ str_arg_1 ] FROM ] str_arg_2) Examples:: cr> select trim(BOTH 'ab' from 'abcba') AS trim; +------+ | trim | +------+ | c | +------+ SELECT 1 row in set (... sec) :: cr> select trim('ab' from 'abcba') AS trim; +------+ | trim | +------+ | c | +------+ SELECT 1 row in set (... sec) :: cr> select trim(' abcba ') AS trim; +-------+ | trim | +-------+ | abcba | +-------+ SELECT 1 row in set (... sec) .. _scalar-ltrim: ``ltrim(text, [ trimmingText ])`` --------------------------------- Removes set of characters which are matching ``trimmingText`` (``' '`` by default) to the left of ``text``. If any of the arguments is ``NULL``, the result is ``NULL``. :: cr> select ltrim('xxxzzzabcba', 'xz') AS ltrim; +-------+ | ltrim | +-------+ | abcba | +-------+ SELECT 1 row in set (... sec) .. 
_scalar-rtrim:

``rtrim(text, [ trimmingText ])``
---------------------------------

Removes the set of characters matching ``trimmingText`` (``' '`` by default)
from the right of ``text``. If any of the arguments is ``NULL``, the result is
``NULL``.

::

    cr> select rtrim('abcbaxxxzzz', 'xz') AS rtrim;
    +-------+
    | rtrim |
    +-------+
    | abcba |
    +-------+
    SELECT 1 row in set (... sec)

.. _scalar-btrim:

``btrim(text, [ trimmingText ])``
---------------------------------

A combination of :ref:`ltrim ` and :ref:`rtrim `, removing the longest string
matching ``trimmingText`` from both the start and end of ``text``. If any of
the arguments is ``NULL``, the result is ``NULL``.

::

    cr> select btrim('XXHelloXX', 'XX') AS btrim;
    +-------+
    | btrim |
    +-------+
    | Hello |
    +-------+
    SELECT 1 row in set (... sec)

.. _scalar-quote_ident:

``quote_ident(text)``
---------------------

Returns: ``text``

Quotes a provided string argument. Quotes are added only if necessary. For
example, if the string contains non-identifier characters, keywords, or would
be case-folded. Embedded quotes are properly doubled. The quoted string can be
used as an identifier in an SQL statement.

::

    cr> select pg_catalog.quote_ident('Column name') AS quoted;
    +---------------+
    | quoted        |
    +---------------+
    | "Column name" |
    +---------------+
    SELECT 1 row in set (... sec)

.. _scalar-left:

``left('string', len)``
-----------------------

Returns the first ``len`` characters of ``string`` when ``len`` > 0, otherwise
all but the last ``len`` characters.

Synopsis::

    left(string, len)

Examples::

    cr> select left('crate.io', 5) AS col;
    +-------+
    | col   |
    +-------+
    | crate |
    +-------+
    SELECT 1 row in set (... sec)

::

    cr> select left('crate.io', -3) AS col;
    +-------+
    | col   |
    +-------+
    | crate |
    +-------+
    SELECT 1 row in set (... sec)

.. _scalar-right:

``right('string', len)``
------------------------

Returns the last ``len`` characters in ``string`` when ``len`` > 0, otherwise
all but the first ``len`` characters.

Synopsis::

    right(string, len)

Examples::

    cr> select right('crate.io', 2) AS col;
    +-----+
    | col |
    +-----+
    | io  |
    +-----+
    SELECT 1 row in set (... sec)

::

    cr> select right('crate.io', -6) AS col;
    +-----+
    | col |
    +-----+
    | io  |
    +-----+
    SELECT 1 row in set (... sec)

.. _scalar-lpad:

``lpad('string1', len[, 'string2'])``
-------------------------------------

Fill up ``string1`` to length ``len`` by prepending the characters ``string2``
(a space by default). If ``string1`` is already longer than ``len`` then it is
truncated (on the right).

Synopsis::

    lpad(string1, len[, string2])

Example::

    cr> select lpad(' I like CrateDB!!', 41, 'yes! ') AS col;
    +-------------------------------------------+
    | col                                       |
    +-------------------------------------------+
    | yes! yes! yes! yes! yes! I like CrateDB!! |
    +-------------------------------------------+
    SELECT 1 row in set (... sec)

.. _scalar-rpad:

``rpad('string1', len[, 'string2'])``
-------------------------------------

Fill up ``string1`` to length ``len`` by appending the characters ``string2``
(a space by default). If ``string1`` is already longer than ``len`` then it is
truncated.

Synopsis::

    rpad(string1, len[, string2])

Example::

    cr> select rpad('Do you like Crate?', 38, ' yes!') AS col;
    +----------------------------------------+
    | col                                    |
    +----------------------------------------+
    | Do you like Crate? yes! yes! yes! yes! |
    +----------------------------------------+
    SELECT 1 row in set (... sec)

.. NOTE::

    In both cases, the scalar functions ``lpad`` and ``rpad`` do not accept a
    length greater than 50000.

..
_scalar-encode: ``encode(bytea, format)`` ------------------------- Encode takes a binary string (``hex`` format) and returns a text encoding using the specified format. Supported formats are: ``base64``, ``hex``, and ``escape``. The ``escape`` format replaces unprintable characters with octal byte notation like ``\nnn``. For the reverse function, see :ref:`decode() `. Synopsis:: encode(string1, format) Example:: cr> select encode(E'123\b\t56', 'base64') AS col; +--------------+ | col | +--------------+ | MTIzCAk1Ng== | +--------------+ SELECT 1 row in set (... sec) .. _scalar-decode: ``decode(text, format)`` ------------------------- Decodes a text encoded string using the specified format and returns a binary string (``hex`` format). Supported formats are: ``base64``, ``hex``, and ``escape``. For the reverse function, see :ref:`encode() `. Synopsis:: decode(text1, format) Example:: cr> select decode('T\214', 'escape') AS col; +--------+ | col | +--------+ | \x548c | +--------+ SELECT 1 row in set (... sec) .. _scalar-repeat: ``repeat(text, integer)`` ------------------------- Repeats a string the specified number of times. If the number of repetitions is equal or less than zero then the function returns an empty string. Returns: ``text`` :: cr> select repeat('ab', 3) AS repeat; +--------+ | repeat | +--------+ | ababab | +--------+ SELECT 1 row in set (... sec) .. _scalar-strpos: ``strpos(string, substring)`` ----------------------------- Returns the first 1-based index of the specified substring within string. Returns zero if the substring is not found and ``NULL`` if any of the arguments is ``NULL``. Returns: ``integer`` :: cr> SELECT strpos('crate' , 'ate'); +---+ | 3 | +---+ | 3 | +---+ SELECT 1 row in set (... sec) .. _scalar-position: ``position(substring in string)`` --------------------------------- The ``position()`` scalar function is an alias of the :ref:`scalar-strpos` scalar function. Note that the order of the arguments is reversed. .. _scalar-reverse: ``reverse(text)`` ------------------ Reverses the order of the string. Returns ``NULL`` if the argument is ``NULL``. Returns: ``text`` :: cr> select reverse('abcde') as reverse; +---------+ | reverse | +---------+ | edcba | +---------+ SELECT 1 row in set (... sec) .. _scalar-split_part: ``split_part(text, text, integer)`` ----------------------------------- Splits a string into parts using a delimiter and returns the part at the given index. The first part is addressed by index ``1``. Special Cases: * Returns the empty string if the index is greater than the number of parts. * If any of the arguments is ``NULL``, the result is ``NULL``. * If the delimiter is the empty string, the input string is considered as consisting of exactly one part. Returns: ``text`` Synopsis:: split_part(string, delimiter, index) Example:: cr> select split_part('ab--cdef--gh', '--', 2) AS part; +------+ | part | +------+ | cdef | +------+ SELECT 1 row in set (... sec) .. _scalar-parse_uri: ``parse_uri(text)`` ----------------------------------- Returns: ``object`` Parses the given URI string and returns an object containing the various components of the URI. The returned object has the following properties:: "uri" OBJECT AS ( "scheme" TEXT, "userinfo" TEXT, "hostname" TEXT, "port" INT, "path" TEXT, "query" TEXT, "fragment" TEXT ) .. csv-table:: :header: "URI Component", "Description" :widths: 25, 75 :align: left ``scheme`` , "The scheme of the URI (e.g. ``http``, ``crate``, etc.)" ``userinfo`` , "The decoded user-information component of this URI." 
    ``hostname`` , "The hostname or IP address specified in the URI."
    ``port`` , "The port number specified in the URI."
    ``path`` , "The decoded path specified in the URI."
    ``query`` , "The decoded query string specified in the URI."
    ``fragment`` , "The decoded fragment specified in the URI."

.. NOTE::

    For URI properties not specified in the input string, ``null`` is
    returned.

Synopsis::

    parse_uri(text)

Example::

    cr> SELECT parse_uri('crate://my_user@cluster.crate.io:5432/doc?sslmode=verify-full') as uri;
    +------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | uri                                                                                                                                                        |
    +------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | {"fragment": null, "hostname": "cluster.crate.io", "path": "/doc", "port": 5432, "query": "sslmode=verify-full", "scheme": "crate", "userinfo": "my_user"} |
    +------------------------------------------------------------------------------------------------------------------------------------------------------------+
    SELECT 1 row in set (... sec)

If you just want to select a specific URI component, you can use the bracket
notation on the returned object::

    cr> SELECT parse_uri('crate://my_user@cluster.crate.io:5432')['hostname'] as uri_hostname;
    +------------------+
    | uri_hostname     |
    +------------------+
    | cluster.crate.io |
    +------------------+
    SELECT 1 row in set (... sec)

.. _scalar-parse_url:

``parse_url(text)``
-----------------------------------

Returns: ``object``

Parses the given URL string and returns an object containing the various
components of the URL. The returned object has the following properties::

    "url" OBJECT AS (
        "scheme" TEXT,
        "userinfo" TEXT,
        "hostname" TEXT,
        "port" INT,
        "path" TEXT,
        "query" TEXT,
        "parameters" OBJECT AS (
            "key1" ARRAY(TEXT),
            "key2" ARRAY(TEXT)
        ),
        "fragment" TEXT
    )

.. csv-table::
    :header: "URL Component", "Description"
    :widths: 25, 75
    :align: left

    ``scheme`` , "The scheme of the URL (e.g. ``https``, ``crate``, etc.)"
    ``userinfo`` , "The decoded user-information component of this URL."
    ``hostname`` , "The hostname or IP address specified in the URL."
    ``port`` , "The port number specified in the URL. If no port number is specified, the default port for the given scheme will be used."
    ``path`` , "The decoded path specified in the URL."
    ``query`` , "The decoded query string specified in the URL."
    ``parameters`` , "For each query parameter included in the URL, the ``parameters`` object holds a property that stores an array of decoded text values for that specific query parameter."
    ``fragment`` , "The decoded fragment specified in the URL."

.. NOTE::

    For URL properties not specified in the input string, ``null`` is returned.
Synopsis:: parse_url(text) Example:: cr> SELECT parse_url('https://my_user@cluster.crate.io:8000/doc?sslmode=verify-full') as url; +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | url | +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | {"fragment": null, "hostname": "cluster.crate.io", "parameters": {"sslmode": ["verify-full"]}, "path": "/doc", "port": 8000, "query": "sslmode=verify-full", "scheme": "https", "userinfo": "my_user"} | +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ SELECT 1 row in set (... sec) If you just want to select a specific URL component, you can use the bracket notation on the returned object:: cr> SELECT parse_url('https://my_user@cluster.crate.io:5432')['hostname'] as url_hostname; +------------------+ | url_hostname | +------------------+ | cluster.crate.io | +------------------+ SELECT 1 row in set (... sec) Parameter values are always treated as ``text``. There is no conversion of comma-separated parameter values into arrays:: cr> SELECT parse_url('http://crate.io?p1=1,2,3&p1=a&p2[]=1,2,3')['parameters'] as params; +-------------------------------------------+ | params | +-------------------------------------------+ | {"p1": ["1,2,3", "a"], "p2[]": ["1,2,3"]} | +-------------------------------------------+ SELECT 1 row in set (... sec) .. _scalar-date-time: Date and time functions ======================= .. _scalar-date_trunc: ``date_trunc('interval', ['timezone',] timestamp)`` --------------------------------------------------- Returns: ``timestamp with time zone`` Limits a timestamps precision to a given interval. Valid intervals are: * ``second`` * ``minute`` * ``hour`` * ``day`` * ``week`` * ``month`` * ``quarter`` * ``year`` Valid values for ``timezone`` are either the name of a time zone (for example 'Europe/Vienna') or the UTC offset of a time zone (for example '+01:00'). To get a complete overview of all possible values take a look at the `available time zones`_ supported by `Joda-Time`_. The following example shows how to use the ``date_trunc`` function to generate a day based histogram in the ``Europe/Moscow`` timezone:: cr> select ... date_trunc('day', 'Europe/Moscow', date) as day, ... count(*) as num_locations ... from locations ... group by 1 ... order by 1; +---------------+---------------+ | day | num_locations | +---------------+---------------+ | 308523600000 | 4 | | 1367352000000 | 1 | | 1373918400000 | 8 | +---------------+---------------+ SELECT 3 rows in set (... sec) If you don't specify a time zone, ``truncate`` uses UTC time:: cr> select date_trunc('day', date) as day, count(*) as num_locations ... from locations ... group by 1 ... order by 1; +---------------+---------------+ | day | num_locations | +---------------+---------------+ | 308534400000 | 4 | | 1367366400000 | 1 | | 1373932800000 | 8 | +---------------+---------------+ SELECT 3 rows in set (... sec) .. _date-bin: ``date_bin(interval, timestamp, origin)`` ----------------------------------------- ``date_bin`` "bins" the input timestamp to the specified interval, aligned with a specified origin. 
``interval`` is an expression of type ``interval``. ``Timestamp`` and ``origin`` are expressions of type ``timestamp with time zone`` or ``timestamp without time zone``. The return type matches the timestamp and origin types and will be either ``timestamp with time zone`` or ``timestamp without time zone``. The return value marks the beginning of the bin into which the input timestamp is placed. If you use an interval with a single unit like ``1 second`` or ``1 minute``, this function returns the same result as :ref:`date_trunc `. Intervals with months and/or year units are not allowed. If the interval is ``1 week``, ``date_bin`` only returns the same result as ``date_trunc`` if the origin is a Monday. If at least one argument is ``NULL``, the return value is ``NULL``. The interval cannot be zero. Negative intervals are allowed and are treated the same as positive intervals. Intervals having month or year units are not supported due to varying length of those units. A timestamp can be binned to an interval of arbitrary length aligned with a custom origin. Examples: :: cr> SELECT date_bin('2 hours'::INTERVAL, ts, ... '2021-01-01T05:00:00Z'::TIMESTAMP) as bin, ... date_format('%y-%m-%d %H:%i', ... date_bin('2 hours'::INTERVAL, ts, '2021-01-01T05:00:00Z'::TIMESTAMP)) ... formatted_bin ... FROM unnest(ARRAY[ ... '2021-01-01T08:30:10Z', ... '2021-01-01T08:38:10Z', ... '2021-01-01T18:18:10Z', ... '2021-01-01T18:18:10Z' ... ]::TIMESTAMP[]) as tbl (ts); +---------------+----------------+ | bin | formatted_bin | +---------------+----------------+ | 1609484400000 | 21-01-01 07:00 | | 1609484400000 | 21-01-01 07:00 | | 1609520400000 | 21-01-01 17:00 | | 1609520400000 | 21-01-01 17:00 | +---------------+----------------+ SELECT 4 rows in set (... sec) .. TIP:: 0 can be used as a shortcut for Unix zero as the origin:: cr> select date_bin('2 hours' :: INTERVAL, ... '2021-01-01T08:30:10Z' :: timestamp without time ZONE, 0) as bin; +---------------+ | bin | +---------------+ | 1609488000000 | +---------------+ SELECT 1 row in set (... sec) Please note, that implicit cast treats numbers as is, i.e. as a timestamp in that zone and if timestamp is in non-UTC zone you might want to set numeric origin to the same zone:: cr> select date_bin('4 hours' :: INTERVAL, ... '2020-01-01T09:00:00+0200'::timestamp with time zone, ... TIMEZONE('+02:00', 0)) as bin; +---------------+ | bin | +---------------+ | 1577858400000 | +---------------+ SELECT 1 row in set (... sec) .. _scalar-extract: ``extract(field from source)`` ------------------------------ ``extract`` is a special :ref:`expression ` that translates to a function which retrieves subcolumns such as day, hour or minute from a timestamp or an interval. The return type depends on the used ``field``. Example with timestamp:: cr> select extract(day from '2014-08-23') AS day; +-----+ | day | +-----+ | 23 | +-----+ SELECT 1 row in set (... sec) Example with interval:: cr> select extract(hour from INTERVAL '5 days 12 hours 45 minutes') AS hour; +------+ | hour | +------+ | 12 | +------+ SELECT 1 row in set (... sec) Synopsis:: EXTRACT( field FROM source ) ``field`` An identifier or string literal which identifies the part of the timestamp or interval that should be extracted. ``source`` An expression that resolves to an interval, or a timestamp (with or without timezone), or is castable to a timestamp. .. 
NOTE:: When extracting from an :ref:`INTERVAL ` there is normalization of units, up to days e.g.:: cr> SELECT extract(day from INTERVAL '14 years 1250 days 49 hours') AS days; +------+ | days | +------+ | 1252 | +------+ SELECT 1 row in set (... sec) The following fields are supported: ``CENTURY`` | *Return type:* ``integer`` | century of era Returns the ISO representation which is a straight split of the date. Year 2000 century 20 and year 2001 is also century 20. This is different to the GregorianJulian (GJ) calendar system where 2001 would be century 21. ``YEAR`` | *Return type:* ``integer`` | the year field ``QUARTER`` | *Return type:* ``integer`` | the quarter of the year (1 - 4) ``MONTH`` | *Return type:* ``integer`` | the month of the year ``WEEK`` | *Return type:* ``integer`` | the week of the year ``DAY`` | *Return type:* ``integer`` | the day of the month for timestamps, days for intervals ``DAY_OF_MONTH`` | *Return type:* ``integer`` | same as ``day`` ``DAY_OF_WEEK`` | *Return type:* ``integer`` | day of the week. Starting with Monday (1) to Sunday (7) ``DOW`` | *Return type:* ``integer`` | same as ``day_of_week`` ``DAY_OF_YEAR`` | *Return type:* ``integer`` | the day of the year (1 - 365 / 366) ``DOY`` | *Return type:* ``integer`` | same as ``day_of_year`` ``HOUR`` | *Return type:* ``integer`` | the hour field ``MINUTE`` | *Return type:* ``integer`` | the minute field ``SECOND`` | *Return type:* ``integer`` | the second field ``EPOCH`` | *Return type:* ``double precision`` | The number of seconds since Jan 1, 1970. | Can be negative if earlier than Jan 1, 1970. .. _scalar-current_time: ``CURRENT_TIME`` ---------------- The ``CURRENT_TIME`` :ref:`expression ` returns the time in microseconds since midnight UTC at the time the SQL statement was handled. Clock time is looked up at most once within the scope of a single query, to ensure that multiple occurrences of ``CURRENT_TIME`` :ref:`evaluate ` to the same value. Synopsis:: CURRENT_TIME [ ( precision ) ] ``precision`` Must be a positive integer between 0 and 6. The default value is 6. It determines the number of fractional seconds to output. A value of 0 means the time will have second precision, no fractional seconds (microseconds) are given. .. NOTE:: No guarantee is provided about the accuracy of the underlying clock, results may be limited to millisecond precision, depending on the system. .. _scalar-current_timestamp: ``CURRENT_TIMESTAMP`` --------------------- The ``CURRENT_TIMESTAMP`` expression returns the timestamp in milliseconds since midnight UTC at the time the SQL statement was handled. Therefore, the same timestamp value is returned for every invocation of a single statement. Synopsis:: CURRENT_TIMESTAMP [ ( precision ) ] ``precision`` Must be a positive integer between ``0`` and ``3``. The default value is ``3``. This value determines the number of fractional seconds to output. A value of ``0`` means the timestamp will have second precision, no fractional seconds (milliseconds) are given. .. TIP:: To get an offset value of ``CURRENT_TIMESTAMP`` (e.g., this same time one day ago), you can add or subtract an :ref:`interval `, like so:: CURRENT_TIMESTAMP - '1 day'::interval .. NOTE:: If the ``CURRENT_TIMESTAMP`` function is used in :ref:`ddl-generated-columns` it behaves slightly different in ``UPDATE`` operations. In such a case the actual timestamp of each row update is returned. .. 
_scalar-curdate: ``CURDATE()`` ---------------- The ``CURDATE()`` scalar function is an alias of the :ref:`scalar-current_date` expression. Synopsis:: CURDATE() .. _scalar-current_date: ``CURRENT_DATE`` ---------------- The ``CURRENT_DATE`` expression returns the date in UTC timezone at the time the SQL statement was handled. Clock time is looked up at most once within the scope of a single query, to ensure that multiple occurrences of ``CURRENT_DATE`` evaluate to the same value. Synopsis:: CURRENT_DATE .. _scalar-now: ``now()`` --------- Returns the current date and time in UTC. This is the same as ``current_timestamp`` Returns: ``timestamp with time zone`` Synopsis:: now() .. _scalar-date_format: ``date_format([format_string, [timezone,]] timestamp)`` ------------------------------------------------------- The ``date_format`` function formats a timestamp as string according to the (optional) format string. Returns: ``text`` Synopsis:: DATE_FORMAT( [ format_string, [ timezone, ] ] timestamp ) The only mandatory argument is the ``timestamp`` value to format. It can be any :ref:`expression ` that is safely convertible to timestamp data type with or without timezone. The syntax for the ``format_string`` is 100% compatible to the syntax of the `MySQL date_format`_ function. For reference, the format is listed in detail below: .. csv-table:: :header: "Format Specifier", "Description" ``%a``, "Abbreviated weekday name (Sun..Sat)" ``%b``, "Abbreviated month name (Jan..Dec)" ``%c``, "Month in year, numeric (0..12)" ``%D``, "Day of month as ordinal number (1st, 2nd, ... 24th)" ``%d``, "Day of month, padded to 2 digits (00..31)" ``%e``, "Day of month (0..31)" ``%f``, "Microseconds, padded to 6 digits (000000..999999)" ``%H``, "Hour in 24-hour clock, padded to 2 digits (00..23)" ``%h``, "Hour in 12-hour clock, padded to 2 digits (01..12)" ``%I``, "Hour in 12-hour clock, padded to 2 digits (01..12)" ``%i``, "Minutes, numeric (00..59)" ``%j``, "Day of year, padded to 3 digits (001..366)" ``%k``, "Hour in 24-hour clock (0..23)" ``%l``, "Hour in 12-hour clock (1..12)" ``%M``, "Month name (January..December)" ``%m``, "Month in year, numeric, padded to 2 digits (00..12)" ``%p``, "AM or PM" ``%r``, "Time, 12-hour (``hh:mm:ss`` followed by AM or PM)" ``%S``, "Seconds, padded to 2 digits (00..59)" ``%s``, "Seconds, padded to 2 digits (00..59)" ``%T``, "Time, 24-hour (``hh:mm:ss``)" ``%U``, "Week number, Sunday as first day of the week, first week of the year (01) is the one starting in this year, week 00 starts in last year (00..53)" ``%u``, "Week number, Monday as first day of the week, first week of the year (01) is the one with at least 4 days in this year (00..53)" ``%V``, "Week number, Sunday as first day of the week, first week of the year (01) is the one starting in this year, uses the week number of the last year, if the week started in last year (01..53)" ``%v``, "Week number, Monday as first day of the week, first week of the year (01) is the one with at least 4 days in this year, uses the week number of the last year, if the week started in last year (01..53)" ``%W``, "Weekday name (Sunday..Saturday)" ``%w``, "Day of the week (0=Sunday..6=Saturday)" ``%X``, "Week year, Sunday as first day of the week, numeric, four digits; used with %V" ``%x``, "Week year, Monday as first day of the week, numeric, four digits; used with %v" ``%Y``, "Year, numeric, four digits" ``%y``, "Year, numeric, two digits" ``%%``, "A literal '%' character" ``%x``, "x, for any 'x' not listed above" If no ``format_string`` is 
given the default format will be used:: %Y-%m-%dT%H:%i:%s.%fZ :: cr> select date_format('1970-01-01') as epoque; +-----------------------------+ | epoque | +-----------------------------+ | 1970-01-01T00:00:00.000000Z | +-----------------------------+ SELECT 1 row in set (... sec) Valid values for ``timezone`` are either the name of a time zone (for example 'Europe/Vienna') or the UTC offset of a time zone (for example '+01:00'). To get a complete overview of all possible values take a look at the `available time zones`_ supported by `Joda-Time`_. The ``timezone`` will be ``UTC`` if not provided:: cr> select date_format('%W the %D of %M %Y %H:%i %p', 0) as epoque; +-------------------------------------------+ | epoque | +-------------------------------------------+ | Thursday the 1st of January 1970 00:00 AM | +-------------------------------------------+ SELECT 1 row in set (... sec) :: cr> select date_format('%Y/%m/%d %H:%i', 'EST', 0) as est_epoque; +------------------+ | est_epoque | +------------------+ | 1969/12/31 19:00 | +------------------+ SELECT 1 row in set (... sec) .. _scalar-timezone: ``timezone(timezone, timestamp)`` --------------------------------- The timezone scalar function converts values of ``timestamp`` without time zone to/from timestamp with time zone. Synopsis:: TIMEZONE(timezone, timestamp) It has two variants depending on the type of ``timestamp``: .. csv-table:: :header: "Type of timestamp", "Return Type", "Description" "timestamp without time zone OR bigint", "timestamp with time zone", "Treat given timestamp without time zone as located in the specified timezone" "timestamp with time zone", "timestamp without time zone", "Convert given timestamp with time zone to the new timezone with no time zone designation" :: cr> select ... 257504400000 as no_tz, ... date_format( ... '%Y-%m-%d %h:%i', 257504400000 ... ) as no_tz_str, ... timezone( ... 'Europe/Madrid', 257504400000 ... ) as in_madrid, ... date_format( ... '%Y-%m-%d %h:%i', ... timezone( ... 'Europe/Madrid', 257504400000 ... ) ... ) as in_madrid_str; +--------------+------------------+--------------+------------------+ | no_tz | no_tz_str | in_madrid | in_madrid_str | +--------------+------------------+--------------+------------------+ | 257504400000 | 1978-02-28 09:00 | 257500800000 | 1978-02-28 08:00 | +--------------+------------------+--------------+------------------+ SELECT 1 row in set (... sec) :: cr> select ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp with time zone ... ) as epoque, ... date_format( ... '%Y-%m-%d %h:%i', ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp with time zone ... ) ... ) as epoque_str; +--------------+------------------+ | epoque | epoque_str | +--------------+------------------+ | 257508000000 | 1978-02-28 10:00 | +--------------+------------------+ SELECT 1 row in set (... sec) :: cr> select ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp without time zone ... ) as epoque, ... date_format( ... '%Y-%m-%d %h:%i', ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp without time zone ... ) ... ) as epoque_str; +--------------+------------------+ | epoque | epoque_str | +--------------+------------------+ | 257504400000 | 1978-02-28 09:00 | +--------------+------------------+ SELECT 1 row in set (... sec) .. 
_scalar-to_char: ``to_char(expression, format_string)`` -------------------------------------- The ``to_char`` function converts a ``timestamp`` or ``interval`` value to a string, based on a given format string. Returns: ``text`` Synopsis:: TO_CHAR( expression, format_string ) Here, ``expression`` can be any value with the type of ``timestamp`` (with or without a timezone) or ``interval``. The syntax for the ``format_string`` differs based the type of the :ref:`expression `. For ``timestamp`` expressions, the ``format_string`` is a template string containing any of the following symbols: +-----------------------+-----------------------------------------------------+ | Pattern | Description | +=======================+=====================================================+ | ``HH`` / ``HH12`` | Hour of day (01-12) | +-----------------------+-----------------------------------------------------+ | ``HH24`` | Hour of day (00-23) | +-----------------------+-----------------------------------------------------+ | ``MI`` | Minute (00-59) | +-----------------------+-----------------------------------------------------+ | ``SS`` | Second (00-59) | +-----------------------+-----------------------------------------------------+ | ``MS`` | Millisecond (000-999) | +-----------------------+-----------------------------------------------------+ | ``US`` | Microsecond (000000-999999) | +-----------------------+-----------------------------------------------------+ | ``FF1`` | Tenth of second (0-9) | +-----------------------+-----------------------------------------------------+ | ``FF2`` | Hundredth of second (00-99) | +-----------------------+-----------------------------------------------------+ | ``FF3`` | Millisecond (000-999) | +-----------------------+-----------------------------------------------------+ | ``FF4`` | Tenth of millisecond (0000-9999) | +-----------------------+-----------------------------------------------------+ | ``FF5`` | Hundredth of millisecond (00000-99999) | +-----------------------+-----------------------------------------------------+ | ``FF6`` | Microsecond (000000-999999) | +-----------------------+-----------------------------------------------------+ | ``SSSS`` / ``SSSSS`` | Seconds past midnight (0-86399) | +-----------------------+-----------------------------------------------------+ | ``AM`` / ``am`` / | Meridiem indicator | | ``PM`` / ``pm`` | | +-----------------------+-----------------------------------------------------+ | ``A.M.`` / ``a.m.`` / | Meridiem indicator (with periods) | | ``P.M.`` / ``p.m.`` | | +-----------------------+-----------------------------------------------------+ | ``Y,YYY`` | 4 digit year with comma | +-----------------------+-----------------------------------------------------+ | ``YYYY`` | 4 digit year | +-----------------------+-----------------------------------------------------+ | ``yyyy`` | 4 digit year | +-----------------------+-----------------------------------------------------+ | ``YYY`` | Last 3 digits of year | +-----------------------+-----------------------------------------------------+ | ``YY`` | Last 2 digits of year | +-----------------------+-----------------------------------------------------+ | ``Y`` | Last digit of year | +-----------------------+-----------------------------------------------------+ | ``IYYY`` | 4 digit ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``IYY`` | Last 3 digits of ISO-8601 week-numbering year | 
+-----------------------+-----------------------------------------------------+ | ``IY`` | Last 2 digits of ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``I`` | Last digit of ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``BC`` / ``bc`` / | Era indicator | | ``AD`` / ``ad`` | | +-----------------------+-----------------------------------------------------+ | ``B.C.`` / ``b.c.`` / | Era indicator with periods | | ``A.D.`` / ``a.d.`` | | +-----------------------+-----------------------------------------------------+ | ``MONTH`` / ``Month`` | Full month name (uppercase, capitalized, lowercase) | | / ``month`` | padded to 9 characters | +-----------------------+-----------------------------------------------------+ | ``MON`` / ``Mon`` / | Short month name (uppercase, capitalized, | | ``mon`` | lowercase) padded to 9 characters | +-----------------------+-----------------------------------------------------+ | ``MM`` | Month number (01-12) | +-----------------------+-----------------------------------------------------+ | ``DAY`` / ``Day`` / | Full day name (uppercase, capitalized, lowercase) | | ``day`` | padded to 9 characters | +-----------------------+-----------------------------------------------------+ | ``DY`` / ``Dy`` / | Short, 3 character day name | | ``dy`` | (uppercase, capitalized, lowercase) | +-----------------------+-----------------------------------------------------+ | ``DDD`` | Day of year (001-366) | +-----------------------+-----------------------------------------------------+ | ``IDDD`` | Day of ISO-8601 week-numbering year, where the | | | first Monday of the first ISO week is day 1 | | | (001-371) | +-----------------------+-----------------------------------------------------+ | ``DD`` | Day of month (01-31) | +-----------------------+-----------------------------------------------------+ | ``D`` | Day of the week, from Sunday (1) to Saturday (7) | +-----------------------+-----------------------------------------------------+ | ``ID`` | ISO-8601 day of the week, from Monday (1) to Sunday | | | (7) | +-----------------------+-----------------------------------------------------+ | ``W`` | Week of month (1-5) | +-----------------------+-----------------------------------------------------+ | ``WW`` | Week number of year (1-53) | +-----------------------+-----------------------------------------------------+ | ``IW`` | Week number of ISO-8601 week-numbering year (01-53) | +-----------------------+-----------------------------------------------------+ | ``CC`` | Century | +-----------------------+-----------------------------------------------------+ | ``J`` | Julian Day | +-----------------------+-----------------------------------------------------+ | ``Q`` | Quarter | +-----------------------+-----------------------------------------------------+ | ``RM`` / ``rm`` | Month in Roman numerals (uppercase, lowercase) | +-----------------------+-----------------------------------------------------+ | ``TZ`` / ``tz`` | Time-zone abbreviation (uppercase, lowercase) | +-----------------------+-----------------------------------------------------+ | ``TZH`` | Time-zone hours | +-----------------------+-----------------------------------------------------+ | ``TZM`` | Time-zone minutes | +-----------------------+-----------------------------------------------------+ | ``OF`` | Time-zone offset from UTC | 
+-----------------------+-----------------------------------------------------+ Example:: cr> select ... to_char( ... timestamp '1970-01-01T17:31:12', ... 'Day, Month DD - HH12:MI AM YYYY AD' ... ) as ts; +-----------------------------------------+ | ts | +-----------------------------------------+ | Thursday, January 01 - 05:31 PM 1970 AD | +-----------------------------------------+ SELECT 1 row in set (... sec) For ``interval`` expressions, the formatting string accepts the same tokens as ``timestamp`` expressions. The function then uses the timestamp of the specified interval added to the timestamp of ``0000/01/01 00:00:00``:: cr> select ... to_char( ... interval '1 year 3 weeks 200 minutes', ... 'YYYY MM DD HH12:MI:SS' ... ) as interval; +---------------------+ | interval | +---------------------+ | 0001 01 22 03:20:00 | +---------------------+ SELECT 1 row in set (... sec) .. _scalar-pg-age: ``age([timestamp,] timestamp)`` --------------------------------------------------- Returns: :ref:`interval ` between 2 timestamps. Second argument is subtracted from the first one. If at least one argument is ``NULL``, the return value is ``NULL``. If only one timestamp is given, the return value is interval between current_date (at midnight) and the given timestamp. Example:: cr> select pg_catalog.age('2021-10-21'::timestamp, '2021-10-20'::timestamp) ... as age; +----------------+ | age | +----------------+ | 1 day 00:00:00 | +----------------+ SELECT 1 row in set (... sec) cr> select pg_catalog.age(date_trunc('day', CURRENT_DATE)) as age; +----------+ | age | +----------+ | 00:00:00 | +----------+ SELECT 1 row in set (... sec) .. _scalar-geo: Geo functions ============= .. _scalar-distance: ``distance(geo_point1, geo_point2)`` ------------------------------------ Returns: ``double precision`` The ``distance`` function can be used to calculate the distance between two points on earth. It uses the `Haversine formula`_ which gives great-circle distances between 2 points on a sphere based on their latitude and longitude. The return value is the distance in meters. Below is an example of the distance function where both points are specified using WKT. See :ref:`data-types-geo` for more information on the implicit type casting of geo points:: cr> select distance('POINT (10 20)', 'POINT (11 21)') AS col; +-------------------+ | col | +-------------------+ | 152354.3209044634 | +-------------------+ SELECT 1 row in set (... sec) This scalar function can always be used in both the ``WHERE`` and ``ORDER BY`` clauses. With the limitation that one of the arguments must be a literal and the other argument must be a column reference. .. NOTE:: The algorithm of the calculation which is used when the distance function is used as part of the result column list has a different precision than what is stored inside the index which is utilized if the distance function is part of a WHERE clause. For example, if ``select distance(...)`` returns 0.0, an equality check with ``where distance(...) = 0`` might not yield anything at all due to the precision difference. .. _scalar-within: ``within(shape1, shape2)`` -------------------------- Returns: ``boolean`` The ``within`` function returns true if ``shape1`` is within ``shape2``. If that is not the case false is returned. ``shape1`` can either be a ``geo_shape`` or a ``geo_point``. ``shape2`` must be a ``geo_shape``. 
Below is an example of the ``within`` function which makes use of the implicit type casting from strings in WKT representation to geo point and geo shapes:: cr> select within( ... 'POINT (10 10)', ... 'POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))' ... ) AS is_within; +-----------+ | is_within | +-----------+ | TRUE | +-----------+ SELECT 1 row in set (... sec) This function can always be used within the ``WHERE`` clause. .. _scalar-intersects: ``intersects(geo_shape, geo_shape)`` ------------------------------------ Returns: ``boolean`` The ``intersects`` function returns true if both argument shapes share some points or area, they *overlap*. This also includes two shapes where one lies :ref:`within ` the other. If ``false`` is returned, both shapes are considered *disjoint*. Example:: cr> select ... intersects( ... {type='Polygon', coordinates=[ ... [[13.4252, 52.7096],[13.9416, 52.0997], ... [12.7221, 52.1334],[13.4252, 52.7096]]]}, ... 'LINESTRING(13.9636 52.6763, 13.2275 51.9578, ... 12.9199 52.5830, 11.9970 52.6830)' ... ) as intersects, ... intersects( ... {type='Polygon', coordinates=[ ... [[13.4252, 52.7096],[13.9416, 52.0997], ... [12.7221, 52.1334],[13.4252, 52.7096]]]}, ... 'LINESTRING (11.0742 49.4538, 11.5686 48.1367)' ... ) as disjoint; +------------+----------+ | intersects | disjoint | +------------+----------+ | TRUE | FALSE | +------------+----------+ SELECT 1 row in set (... sec) Due to a limitation on the :ref:`data-types-geo-shape` datatype this function cannot be used in the :ref:`ORDER BY ` clause. .. _scalar-latitude-longitude: ``latitude(geo_point)`` and ``longitude(geo_point)`` ---------------------------------------------------- Returns: ``double precision`` The ``latitude`` and ``longitude`` function return the coordinates of latitude or longitude of a point, or ``NULL`` if not available. The input must be a column of type ``geo_point``, a valid WKT string or a ``double precision`` array. See :ref:`data-types-geo` for more information on the implicit type casting of geo points. Example:: cr> select ... mountain, ... height, ... longitude(coordinates) as "lon", ... latitude(coordinates) as "lat" ... from sys.summits ... order by height desc limit 1; +------------+--------+---------+---------+ | mountain | height | lon | lat | +------------+--------+---------+---------+ | Mont Blanc | 4808 | 6.86444 | 45.8325 | +------------+--------+---------+---------+ SELECT 1 row in set (... sec) Below is an example of the latitude/longitude functions which make use of the implicit type casting from strings to geo point:: cr> select ... latitude('POINT (10 20)') AS lat, ... longitude([10.0, 20.0]) AS long; +------+------+ | lat | long | +------+------+ | 20.0 | 10.0 | +------+------+ SELECT 1 row in set (... sec) .. _scalar-geohash: ``geohash(geo_point)`` ---------------------- Returns: ``text`` Returns a `GeoHash `_ representation based on full precision (12 characters) of the input point, or ``NULL`` if not available. The input has to be a column of type ``geo_point``, a valid WKT string or a ``double precision`` array. See :ref:`data-types-geo` for more information of the implicit type casting of geo points. Example:: cr> select ... mountain, ... height, ... geohash(coordinates) as "geohash" ... from sys.summits ... order by height desc limit 1; +------------+--------+--------------+ | mountain | height | geohash | +------------+--------+--------------+ | Mont Blanc | 4808 | u0huspw99j1r | +------------+--------+--------------+ SELECT 1 row in set (... sec) .. 
_scalar-area: ``area(geo_shape)`` ---------------------- Returns: ``double precision`` The ``area`` function calculates the area of the input shape in square-degrees. The calculation will use geospatial awareness (AKA `geodetic`_) instead of `Euclidean geometry`_. The input has to be a column of type :ref:`data-types-geo-shape`, a valid `WKT`_ string or `GeoJSON`_. See :ref:`data-types-geo-shape` for more information. Below you can find an example. Example:: cr> select ... round(area('POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))')) as "area"; +------+ | area | +------+ | 25 | +------+ SELECT 1 row in set (... sec) .. _scalar-math: Mathematical functions ====================== All mathematical functions can be used within ``WHERE`` and ``ORDER BY`` clauses. .. _scalar-abs: ``abs(number)`` --------------- Returns the absolute value of the given number in the datatype of the given number. Example:: cr> select abs(214748.0998) AS a, abs(0) AS b, abs(-214748) AS c; +-------------+---+--------+ | a | b | c | +-------------+---+--------+ | 214748.0998 | 0 | 214748 | +-------------+---+--------+ SELECT 1 row in set (... sec) .. _scalar-sign: ``sign(number)`` ---------------- Returns the sign of a number. This function will return one of the following: - If number > 0, it returns 1.0 - If number = 0, it returns 0.0 - If number < 0, it returns -1.0 - If number is NULL, it returns NULL The data type of the return value is ``numeric`` if the argument is ``numeric`` and ``double precision`` for the rest of numeric types. For example:: cr> select sign(12.34) as a, sign(0) as b, sign (-77) as c, sign(NULL) as d; +-----+-----+------+------+ | a | b | c | d | +-----+-----+------+------+ | 1.0 | 0.0 | -1.0 | NULL | +-----+-----+------+------+ SELECT 1 row in set (... sec) .. _scalar-ceil: ``ceil(number)`` ---------------- Returns the smallest integral value that is not less than the argument. Returns: ``numeric``, ``bigint`` or ``integer`` Return value will be of type ``numeric`` if the input value is of ``numeric`` type, with the same precision and scale as the input type. It will be of ``integer`` if the input value is an ``integer``` or ``float```. If the input value is of type ``bigint`` or ``double precision`` the return value will be of type ``bigint``. Example:: cr> select ceil(29.9) AS col; +-----+ | col | +-----+ | 30 | +-----+ SELECT 1 row in set (... sec) .. _scalar-ceiling: ``ceiling(number)`` ------------------- This is an alias for :ref:`ceil `. .. _scalar-degrees: ``degrees(double precision)`` ----------------------------- Convert the given ``radians`` value to ``degrees``. Returns: ``double precision`` :: cr> select degrees(0.5) AS degrees; +-------------------+ | degrees | +-------------------+ | 28.64788975654116 | +-------------------+ SELECT 1 row in set (... sec) .. _scalar-exp: ``exp(number)`` --------------- Returns Euler's number ``e`` raised to the power of the given numeric value. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: > select exp(1.0) AS exp; +-------------------+ | exp | +-------------------+ | 2.718281828459045 | +-------------------+ SELECT 1 row in set (... sec) .. test skipped because java.lang.Math.exp() can return with different precision on different CPUs (e.g.: Apple M1) .. 
_scalar-floor:

``floor(number)``
-----------------

Returns the largest integral value that is not greater than the argument.

Returns: ``numeric``, ``bigint`` or ``integer``

Return value will be of type ``numeric`` if the input value is of ``numeric``
type, with the same precision and scale as the input type. It will be of
``integer`` if the input value is an ``integer`` or ``float``. If the input
value is of type ``bigint`` or ``double precision`` the return value will be
of type ``bigint``.

Example::

    cr> select floor(29.9) AS floor;
    +-------+
    | floor |
    +-------+
    |    29 |
    +-------+
    SELECT 1 row in set (... sec)

.. _scalar-ln:

``ln(number)``
--------------

Returns the natural logarithm of given ``number``.

Returns: ``numeric`` or ``double precision``

Return value will be of type ``numeric`` with unspecified precision and scale
if the input value is of ``numeric`` type, and ``double precision`` for any
other arithmetic type.

Example::

    cr> SELECT ln(1) AS ln;
    +-----+
    | ln  |
    +-----+
    | 0.0 |
    +-----+
    SELECT 1 row in set (... sec)

.. NOTE::

    An error is returned for arguments which lead to undefined or illegal
    results. E.g. ln(0) results in ``minus infinity``, and therefore, an error
    is returned.

.. _scalar-log:

``log(x : number[, b : number])``
---------------------------------

Returns the logarithm of given ``x`` to base ``b``.

Returns: ``numeric`` or ``double precision``

When the second argument (``b``) is provided, it returns a value of type
``double precision``, even if ``x`` is of type ``numeric``, as it is
implicitly cast to ``double precision`` (thus possibly losing precision). When
it is not provided, the return value will be of type ``numeric`` with
unspecified precision and scale if the input value is of ``numeric`` type, and
``double precision`` for any other arithmetic type.

Examples::

    cr> SELECT log(100, 10) AS log;
    +-----+
    | log |
    +-----+
    | 2.0 |
    +-----+
    SELECT 1 row in set (... sec)

The second argument (``b``) is optional. If not present, base 10 is used::

    cr> SELECT log(100) AS log;
    +-----+
    | log |
    +-----+
    | 2.0 |
    +-----+
    SELECT 1 row in set (... sec)

.. NOTE::

    An error is returned for arguments which lead to undefined or illegal
    results. E.g. log(0) results in ``minus infinity``, and therefore, an
    error is returned. The same is true for arguments which lead to a
    ``division by zero``, as, e.g., log(10, 1) does.

.. _scalar-modulus:

``modulus(y, x)``
-----------------

Returns the remainder of ``y/x``.

Returns: Same as argument types.

::

    cr> select modulus(5, 4) AS mod;
    +-----+
    | mod |
    +-----+
    |   1 |
    +-----+
    SELECT 1 row in set (... sec)

.. _scalar-mod:

``mod(y, x)``
-------------

This is an alias for :ref:`modulus `.

.. _scalar-power:

``power(a: number, b: number)``
-------------------------------

Returns the given argument ``a`` raised to the power of argument ``b``.

Returns: ``double precision``

The return type of the power function is always ``double precision``, even
when both the inputs are integral types, in order to be consistent across
positive and negative exponents (which will yield decimal types). See below
for an example::

    cr> SELECT power(2,3) AS pow;
    +-----+
    | pow |
    +-----+
    | 8.0 |
    +-----+
    SELECT 1 row in set (... sec)

.. _scalar-radians:

``radians(double precision)``
-----------------------------

Convert the given ``degrees`` value to ``radians``.

Returns: ``double precision``

::

    cr> select radians(45.0) AS radians;
    +--------------------+
    | radians            |
    +--------------------+
    | 0.7853981633974483 |
    +--------------------+
    SELECT 1 row in set (... sec)

..
_scalar-random: ``random()`` ------------ The ``random`` function returns a random value in the range 0.0 <= X < 1.0. Returns: ``double precision`` .. NOTE:: Every call to ``random`` will yield a new random number. .. _scalar-gen_random_text_uuid: ``gen_random_text_uuid()`` -------------------------- Returns a random time based UUID as ``text``. The returned ID is similar to flake IDs and well suited for use as primary key value. Note that the ID is opaque (i.e., not to be considered meaningful in any way) and the implementation is free to change. .. _scalar-round: ``round(number[, precision])`` ------------------------------ Returns ``number`` rounded to the specified ``precision`` (decimal places). When ``precision`` is not specified, the ``round`` function rounds the input value to the closest integer for ``real`` and ``integer`` data types with ties rounding up, and to the closest ``bigint`` value for ``double precision`` and ``bigint`` data types with ties rounding up. When the data type of the argument is ``numeric``, then it returns the closest ``numeric`` value with the same precision and scale as the input type, with all decimal digits zeroed out, and with ties rounding up. When it is specified, the result's type is ``numeric``. If ``number`` is of ``numeric`` datatype, then the ``numeric`` type of the result has the same precision and scale with the input. If it's of any other arithmetic type, the ``numeric`` datatype of the result has unspecified precision and scale. Notice that ``round(number)`` and ``round(number, 0)`` may return different result types. Examples:: cr> select round(42.2) AS round; +-------+ | round | +-------+ | 42 | +-------+ SELECT 1 row in set (... sec) cr> select round(42.21, 1) AS round; +-------+ | round | +-------+ | 42.2 | +-------+ SELECT 1 row in set (... sec) .. _scalar-trunc: ``trunc(number[, precision])`` ------------------------------ Returns ``number`` truncated to the specified ``precision`` (decimal places). When ``precision`` is not specified, the result's type is an ``integer``, or ``bigint``. When it is specified, the result's type is ``double precision``. Notice that ``trunc(number)`` and ``trunc(number, 0)`` return different result types. See below for examples:: cr> select trunc(29.999999, 3) AS trunc; +--------+ | trunc | +--------+ | 29.999 | +--------+ SELECT 1 row in set (... sec) cr> select trunc(29.999999) AS trunc; +-------+ | trunc | +-------+ | 29 | +-------+ SELECT 1 row in set (... sec) .. _scalar-sqrt: ``sqrt(number)`` ---------------- Returns the square root of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> select sqrt(25.0) AS sqrt; +------+ | sqrt | +------+ | 5.0 | +------+ SELECT 1 row in set (... sec) .. _scalar-sin: ``sin(number)`` --------------- Returns the sine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT sin(1) AS sin; +--------------------+ | sin | +--------------------+ | 0.8414709848078965 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-asin: ``asin(number)`` ---------------- Returns the arcsine of the argument. 
Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT asin(1) AS asin; +--------------------+ | asin | +--------------------+ | 1.5707963267948966 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-cos: ``cos(number)`` --------------- Returns the cosine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT cos(1) AS cos; +--------------------+ | cos | +--------------------+ | 0.5403023058681398 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-acos: ``acos(number)`` ---------------- Returns the arccosine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT acos(-1) AS acos; +-------------------+ | acos | +-------------------+ | 3.141592653589793 | +-------------------+ SELECT 1 row in set (... sec) .. _scalar-tan: ``tan(number)`` --------------- Returns the tangent of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT tan(1) AS tan; +--------------------+ | tan | +--------------------+ | 1.5574077246549023 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-cot: ``cot(number)`` --------------- Returns the cotangent of the argument that represents the angle expressed in radians. The range of the argument is all real numbers. The cotangent of zero is undefined and returns ``Infinity``. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> select cot(1) AS cot; +--------------------+ | cot | +--------------------+ | 0.6420926159343306 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-atan: ``atan(number)`` ---------------- Returns the arctangent of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT atan(1) AS atan; +--------------------+ | atan | +--------------------+ | 0.7853981633974483 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-atan2: ``atan2(y: number, x: number)`` ------------------------------- Returns the arctangent of ``y/x``. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value ``y`` or ``x`` is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT atan2(2, 1) AS atan2; +--------------------+ | atan2 | +--------------------+ | 1.1071487177940904 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-pi: ``pi()`` -------- Returns the π constant. 
Returns: ``double precision`` :: cr> SELECT pi() AS pi; +-------------------+ | pi | +-------------------+ | 3.141592653589793 | +-------------------+ SELECT 1 row in set (... sec) .. _scalar-regexp: Regular expression functions ============================ The :ref:`regular expression ` functions in CrateDB use `Java Regular Expressions`_. See the API documentation for more details. .. NOTE:: Be aware that, in contrast to the functions, the :ref:`regular expression operator ` uses `Lucene Regular Expressions`_. .. _scalar-regexp_replace: ``regexp_replace(source, pattern, replacement [, flags])`` ---------------------------------------------------------- ``regexp_replace`` can be used to replace every (or only the first) occurrence of a subsequence matching ``pattern`` in the ``source`` string with the ``replacement`` string. If no subsequence in ``source`` matches the regular expression ``pattern``, ``source`` is returned unchanged. Returns: ``text`` ``pattern`` is a Java regular expression. For details on the regexp syntax, see `Java Regular Expressions`_. The ``replacement`` string may contain expressions like ``$N`` where ``N`` is a digit between 0 and 9. It references the nth matched group of ``pattern`` and the matching subsequence of that group will be inserted in the returned string. The expression ``$0`` will insert the whole matching ``source``. By default, only the first occurrence of a subsequence matching ``pattern`` will be replaced. If all occurrences shall be replaced use the ``g`` flag. .. _scalar-regexp_replace-flags: Flags ..... ``regexp_replace`` supports a number of flags as optional parameters. These flags are given as a string containing any of the characters listed below. Order does not matter. +-------+---------------------------------------------------------------------+ | Flag | Description | +=======+=====================================================================+ | ``i`` | enable case insensitive matching | +-------+---------------------------------------------------------------------+ | ``u`` | enable unicode case folding when used together with ``i`` | +-------+---------------------------------------------------------------------+ | ``U`` | enable unicode support for character classes like ``\W`` | +-------+---------------------------------------------------------------------+ | ``s`` | make ``.`` match line terminators, too | +-------+---------------------------------------------------------------------+ | ``m`` | make ``^`` and ``$`` match on the beginning or end of a line | | | too. | +-------+---------------------------------------------------------------------+ | ``x`` | permit whitespace and line comments starting with ``#`` | +-------+---------------------------------------------------------------------+ | ``d`` | only ``\n`` is considered a line-terminator when using ``^``, ``$`` | | | and ``.`` | +-------+---------------------------------------------------------------------+ | ``g`` | replace all occurrences of a subsequence matching ``pattern``, | | | not only the first | +-------+---------------------------------------------------------------------+ .. _scalar-regexp_replace-examples: Examples ........ :: cr> select ... name, ... regexp_replace( ... name, '(\w+)\s(\w+)+', '$1 - $2' ... ) as replaced ... from locations ... 
order by name limit 5; +---------------------+-----------------------+ | name | replaced | +---------------------+-----------------------+ | | | | Aldebaran | Aldebaran | | Algol | Algol | | Allosimanius Syneca | Allosimanius - Syneca | | Alpha Centauri | Alpha - Centauri | +---------------------+-----------------------+ SELECT 5 rows in set (... sec) :: cr> select ... regexp_replace( ... 'alcatraz', '(foo)(bar)+', '$1baz' ... ) as replaced; +----------+ | replaced | +----------+ | alcatraz | +----------+ SELECT 1 row in set (... sec) :: cr> select ... name, ... regexp_replace( ... name, '([A-Z]\w+) .+', '$1', 'ig' ... ) as replaced ... from locations ... order by name limit 5; +---------------------+--------------+ | name | replaced | +---------------------+--------------+ | | | | Aldebaran | Aldebaran | | Algol | Algol | | Allosimanius Syneca | Allosimanius | | Alpha Centauri | Alpha | +---------------------+--------------+ SELECT 5 rows in set (... sec) .. _scalar-arrays: Array functions =============== .. _scalar-array_append: ``array_append(anyarray, value)`` ---------------------------------------- The ``array_append`` function adds the value at the end of the array Returns: ``array`` :: cr> select ... array_append([1,2,3], 4) AS array_append; +--------------+ | array_append | +--------------+ | [1, 2, 3, 4] | +--------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` to append values to an array:: cr> select ... [1,2,3] || 4 AS array_append; +--------------+ | array_append | +--------------+ | [1, 2, 3, 4] | +--------------+ SELECT 1 row in set (... sec) .. NOTE:: The ``||`` operator differs from the ``array_append`` function regarding the handling of ``NULL`` arguments. It will ignore a ``NULL`` value while the ``array_append`` function will append a ``NULL`` value to the array. .. _scalar-array_cat: ``array_cat(first_array, second_array)`` ---------------------------------------- The ``array_cat`` function concatenates two arrays into one array Returns: ``array`` :: cr> select ... array_cat([1,2,3],[3,4,5,6]) AS array_cat; +-----------------------+ | array_cat | +-----------------------+ | [1, 2, 3, 3, 4, 5, 6] | +-----------------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` with arrays:: cr> select ... [1,2,3] || [4,5,6] || [7,8,9] AS arr; +-----------------------------+ | arr | +-----------------------------+ | [1, 2, 3, 4, 5, 6, 7, 8, 9] | +-----------------------------+ SELECT 1 row in set (... sec) .. _scalar-array_unique: ``array_unique(first_array, [ second_array])`` ---------------------------------------------- The ``array_unique`` function merges two arrays into one array with unique elements Returns: ``array`` :: cr> select ... array_unique( ... [1, 2, 3], ... [3, 4, 4] ... ) AS arr; +--------------+ | arr | +--------------+ | [1, 2, 3, 4] | +--------------+ SELECT 1 row in set (... sec) If the arrays have different types all elements will be cast to a common type based on the type precedence. :: cr> select ... array_unique( ... [10, 20], ... [10.0, 20.3] ... ) AS arr; +--------------------+ | arr | +--------------------+ | [10.0, 20.0, 20.3] | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-array_difference: ``array_difference(first_array, second_array)`` ----------------------------------------------- The ``array_difference`` function removes elements from the first array that are contained in the second array. Returns: ``array`` :: cr> select ... array_difference( ... 
[1,2,3,4,5,6,7,8,9,10], ... [2,3,6,9,15] ... ) AS arr; +---------------------+ | arr | +---------------------+ | [1, 4, 5, 7, 8, 10] | +---------------------+ SELECT 1 row in set (... sec) .. _scalar-array: ``array(subquery)`` ------------------- The ``array(subquery)`` :ref:`expression ` is an array constructor function which operates on the result of the ``subquery``. Returns: ``array`` .. SEEALSO:: :ref:`Array construction with subquery ` .. _scalar-array_upper: ``array_upper(anyarray, dimension)`` ------------------------------------ The ``array_upper`` function returns the number of elements in the requested array dimension (the upper bound of the dimension). CrateDB allows mixing arrays with different sizes on the same dimension. Returns ``NULL`` if array argument is ``NULL`` or if dimension <= 0 or if dimension is ``NULL``. Returns: ``integer`` :: cr> select array_upper([[1, 4], [3]], 1) AS size; +------+ | size | +------+ | 2 | +------+ SELECT 1 row in set (... sec) An empty array has no dimension and returns ``NULL`` instead of ``0``. :: cr> select array_upper(ARRAY[]::int[], 1) AS size; +------+ | size | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-array_length: ``array_length(anyarray, dimension)`` ------------------------------------- An alias for :ref:`scalar-array_upper`. :: cr> select array_length([[1, 4], [3]], 1) AS len; +-----+ | len | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_lower: ``array_lower(anyarray, dimension)`` ------------------------------------ The ``array_lower`` function returns the lower bound of the requested array dimension (which is ``1`` if the dimension is valid and has at least one element). Returns ``NULL`` if array argument is ``NULL`` or if dimension <= 0 or if dimension is ``NULL``. Returns: ``integer`` :: cr> select array_lower([[1, 4], [3]], 1) AS size; +------+ | size | +------+ | 1 | +------+ SELECT 1 row in set (... sec) If there is at least one empty array or ``NULL`` on the requested dimension return value is ``NULL``. Example: :: cr> select array_lower([[1, 4], [3], []], 2) AS size; +------+ | size | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-array_set: ``array_set(array, index, value)`` ---------------------------------- The ``array_set`` function returns the array with the element at ``index`` set to ``value``. Gaps are filled with ``null``. Returns: ``array`` :: cr> select array_set(['_', 'b'], 1, 'a') AS arr; +------------+ | arr | +------------+ | ["a", "b"] | +------------+ SELECT 1 row in set (... sec) ``array_set(source_array, indexes_array, values_array)`` -------------------------------------------------------- Second overload for ``array_set`` that updates many indices with many values at once. Depending on the indexes provided, ``array_set`` updates or appends the values and also fills any gaps with ``nulls``. Returns: ``array`` :: cr> select array_set(['_', 'b'], [1, 4], ['a', 'd']) AS arr; +-----------------------+ | arr | +-----------------------+ | ["a", "b", null, "d"] | +-----------------------+ SELECT 1 row in set (... sec) .. NOTE:: Updating indexes less than or equal to 0 is not supported. .. _scalar-array_slice: ``array_slice(anyarray, from, to)`` ----------------------------------- The ``array_slice`` function returns a slice of the given array using the given lower and upper bound. Returns: ``array`` .. 
SEEALSO:: :ref:`Accessing arrays` :: cr> select array_slice(['a', 'b', 'c', 'd'], 2, 3) AS arr; +------------+ | arr | +------------+ | ["b", "c"] | +------------+ SELECT 1 row in set (... sec) .. NOTE:: The first index value is ``1``. The maximum array index is ``2147483647``. Both the ``from`` and ``to`` index values are inclusive. Using an index greater than the array size results in an empty array. .. _scalar-array_to_string: ``array_to_string(anyarray, separator, [ null_string ])`` --------------------------------------------------------- The ``array_to_string`` function concatenates elements of the given array into a single string using the ``separator``. Returns: ``text`` :: cr> select ... array_to_string( ... ['Arthur', 'Ford', 'Trillian'], ',' ... ) AS str; +----------------------+ | str | +----------------------+ | Arthur,Ford,Trillian | +----------------------+ SELECT 1 row in set (... sec) If the ``separator`` argument is ``NULL``, the result is ``NULL``:: cr> select ... array_to_string( ... ['Arthur', 'Ford', 'Trillian'], NULL ... ) AS str; +------+ | str | +------+ | NULL | +------+ SELECT 1 row in set (... sec) If ``null_string`` is provided and is not ``NULL``, then ``NULL`` elements of the array are replaced by that string, otherwise they are omitted:: cr> select ... array_to_string( ... ['Arthur', NULL, 'Trillian'], ',', 'Ford' ... ) AS str; +----------------------+ | str | +----------------------+ | Arthur,Ford,Trillian | +----------------------+ SELECT 1 row in set (... sec) :: cr> select ... array_to_string( ... ['Arthur', NULL, 'Trillian'], ',' ... ) AS str; +-----------------+ | str | +-----------------+ | Arthur,Trillian | +-----------------+ SELECT 1 row in set (... sec) :: cr> select ... array_to_string( ... ['Arthur', NULL, 'Trillian'], ',', NULL ... ) AS str; +-----------------+ | str | +-----------------+ | Arthur,Trillian | +-----------------+ SELECT 1 row in set (... sec) .. _scalar-string_to_array: ``string_to_array(string, separator, [ null_string ])`` ------------------------------------------------------- The ``string_to_array`` splits a string into an array of ``text`` elements using a supplied separator and an optional null-string to set matching substring elements to NULL. Returns: ``array(text)`` :: cr> select string_to_array('Arthur,Ford,Trillian', ',') AS arr; +--------------------------------+ | arr | +--------------------------------+ | ["Arthur", "Ford", "Trillian"] | +--------------------------------+ SELECT 1 row in set (... sec) :: cr> select string_to_array('Arthur,Ford,Trillian', ',', 'Ford') AS arr; +------------------------------+ | arr | +------------------------------+ | ["Arthur", null, "Trillian"] | +------------------------------+ SELECT 1 row in set (... sec) .. _scalar-string_to_array-separator: ``separator`` ............. If the ``separator`` argument is NULL, each character of the input string becomes a separate element in the resulting array. :: cr> select string_to_array('Ford', NULL) AS arr; +----------------------+ | arr | +----------------------+ | ["F", "o", "r", "d"] | +----------------------+ SELECT 1 row in set (... sec) If the separator is an empty string, then the entire input string is returned as a one-element array. :: cr> select string_to_array('Arthur,Ford', '') AS arr; +-----------------+ | arr | +-----------------+ | ["Arthur,Ford"] | +-----------------+ SELECT 1 row in set (... sec) .. _scalar-string_to_array-null_string: ``null_string`` ............... 
If the ``null_string`` argument is omitted or NULL, none of the substrings of the input will be replaced by NULL. .. _scalar-array_min: ``array_min(array)`` -------------------- The ``array_min`` function returns the smallest element in ``array``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any of the :ref:`primitive types `. :: cr> SELECT array_min([3, 2, 1]) AS min; +-----+ | min | +-----+ | 1 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_position: ``array_position(anycompatiblearray, anycompatible [, integer ] ) → integer`` ----------------------------------------------------------------------------- The ``array_position`` function returns the position of the first occurrence of the second argument in the ``array``, or ``NULL`` if it's not present. If the third argument is given, the search begins at that position. The third argument is ignored if it's null. If it is not within the ``array`` range, ``NULL`` is returned. It is also possible to search for ``NULL`` values. :: cr> SELECT array_position([1,3,7,4], 7) as position; +----------+ | position | +----------+ | 3 | +----------+ SELECT 1 row in set (... sec) Begin the search from a given position (optional). :: cr> SELECT array_position([1,3,7,4], 7, 2) as position; +----------+ | position | +----------+ | 3 | +----------+ SELECT 1 row in set (... sec) .. TIP:: When searching for the existence of an ``array`` element, using the :ref:`ANY ` operator inside the ``WHERE`` clause is much more efficient as it can utilize the index, whereas ``array_position`` won't, even when used inside the ``WHERE`` clause. .. _scalar-array_prepend: ``array_prepend(value, anyarray)`` ---------------------------------- The ``array_prepend`` function prepends a value to the beginning of the array. Returns: ``array`` :: cr> select ... array_prepend(1, [2,3,4]) AS array_prepend; +---------------+ | array_prepend | +---------------+ | [1, 2, 3, 4] | +---------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` to prepend values to an array:: cr> select ... 1 || [2,3,4] AS array_prepend; +---------------+ | array_prepend | +---------------+ | [1, 2, 3, 4] | +---------------+ SELECT 1 row in set (... sec) .. NOTE:: The ``||`` operator differs from the ``array_prepend`` function regarding the handling of ``NULL`` arguments. It will ignore a ``NULL`` value while the ``array_prepend`` function will prepend a ``NULL`` value to the array. .. _scalar-array_max: ``array_max(array)`` -------------------- The ``array_max`` function returns the largest element in ``array``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any of the :ref:`primitive types `. :: cr> SELECT array_max([1,2,3]) AS max; +-----+ | max | +-----+ | 3 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_sum: ``array_sum(array)`` -------------------- Returns the sum of array elements that are not ``NULL``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any :ref:`numeric types `. For ``real`` and ``double precision`` arguments, the return type is equal to the argument type. For ``char``, ``smallint``, ``integer``, and ``bigint`` arguments, the return type changes to ``bigint``. If any ``bigint`` value exceeds the range limits (-2^63 to 2^63-1), an ``ArithmeticException`` will be raised.
:: cr> SELECT array_sum([1,2,3]) AS sum; +-----+ | sum | +-----+ | 6 | +-----+ SELECT 1 row in set (... sec) The sum on the bigint array will result in an overflow in the following query: :: cr> SELECT ... array_sum( ... [9223372036854775807, 9223372036854775807] ... ) as sum; ArithmeticException[long overflow] To address the overflow of the sum of the given array elements, we cast the array to the numeric data type: :: cr> SELECT ... array_sum( ... [9223372036854775807, 9223372036854775807]::numeric[] ... ) as sum; +----------------------+ | sum | +----------------------+ | 18446744073709551614 | +----------------------+ SELECT 1 row in set (... sec) .. _scalar-array_avg: ``array_avg(array)`` -------------------- Returns the average of all values in ``array`` that are not ``NULL`` If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any :ref:`numeric types `. For ``real`` and ``double precison`` arguments, the return type is equal to the argument type. For ``char``, ``smallint``, ``integer``, and ``bigint`` arguments, the return type is ``numeric``. :: cr> SELECT array_avg([1,2,3]) AS avg; +-----+ | avg | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_unnest: ``array_unnest(nested_array)`` ------------------------------ Takes a nested array and returns a flattened array. Only flattens one level at a time. Returns ``NULL`` if the argument is ``NULL``. ``NULL`` array elements are skipped and ``NULL`` leaf elements within arrays are preserved. :: cr> SELECT array_unnest([[1, 2], [3, 4, 5]]) AS result; +-----------------+ | result | +-----------------+ | [1, 2, 3, 4, 5] | +-----------------+ SELECT 1 row in set (... sec) cr> SELECT array_unnest([[1, null, 2], null, [3, 4, 5]]) AS result; +-----------------------+ | result | +-----------------------+ | [1, null, 2, 3, 4, 5] | +-----------------------+ SELECT 1 row in set (... sec) .. SEEALSO:: :ref:`UNNEST table function ` .. _scalar-null-or-empty-array: ``null_or_empty(array)`` ------------------------- The ``null_or_empty(array)`` function returns a Boolean indicating if an array is ``NULL`` or empty (``[]``). This can serve as a faster alternative to ``IS NULL`` if matching on empty array is acceptable. It makes better use of indices. :: cr> SELECT null_or_empty([]) w, ... null_or_empty([[]]) x, ... null_or_empty(NULL) y, ... null_or_empty([1]) z; +------+-------+------+-------+ | w | x | y | z | +------+-------+------+-------+ | TRUE | FALSE | TRUE | FALSE | +------+-------+------+-------+ SELECT 1 row in set (... sec) .. _scalar-objects: Object functions ================ .. _scalar-object_keys: ``object_keys(object)`` ----------------------- The ``object_keys`` function returns the set of first level keys of an ``object``. Returns: ``array(text)`` :: cr> SELECT ... object_keys({a = 1, b = {c = 2}}) AS object_keys; +-------------+ | object_keys | +-------------+ | ["a", "b"] | +-------------+ SELECT 1 row in set (... sec) .. _scalar-concat-object: ``concat(object, object)`` -------------------------- The ``concat(object, object)`` function combines two objects into a new object containing the union of their first level properties, taking the second object's values for duplicate properties. If one of the objects is ``NULL``, the function returns the non-``NULL`` object. If both objects are ``NULL``, the function returns ``NULL``. Returns: ``object`` :: cr> SELECT ... 
concat({a = 1}, {a = 2, b = {c = 2}}) AS object_concat; +-------------------------+ | object_concat | +-------------------------+ | {"a": 2, "b": {"c": 2}} | +-------------------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` with objects:: cr> SELECT ... {a = 1} || {b = 2} || {c = 3} AS object_concat; +--------------------------+ | object_concat | +--------------------------+ | {"a": 1, "b": 2, "c": 3} | +--------------------------+ SELECT 1 row in set (... sec) .. NOTE:: ``concat(object, object)`` does not operate recursively: only the top-level object structure is merged:: cr> SELECT ... concat({a = {b = 4}}, {a = {c = 2}}) as object_concat; +-----------------+ | object_concat | +-----------------+ | {"a": {"c": 2}} | +-----------------+ SELECT 1 row in set (... sec) .. _scalar-null-or-empty-object: ``null_or_empty(object)`` ------------------------- The ``null_or_empty(object)`` function returns a Boolean indicating if an object is ``NULL`` or empty (``{}``). This can serve as a faster alternative to ``IS NULL`` if matching on empty objects is acceptable. It makes better use of indices. :: cr> SELECT null_or_empty({}) x, null_or_empty(NULL) y, null_or_empty({x=10}) z; +------+------+-------+ | x | y | z | +------+------+-------+ | TRUE | TRUE | FALSE | +------+------+-------+ SELECT 1 row in set (... sec) .. _scalar-conditional-fn-exp: Conditional functions and expressions ===================================== .. _scalar-case-when-then-end: ``CASE WHEN ... THEN ... END`` ------------------------------ The ``case`` :ref:`expression ` is a generic conditional expression similar to if/else statements in other programming languages and can be used wherever an expression is valid. :: CASE WHEN condition THEN result [WHEN ...] [ELSE result] END Each *condition* expression must result in a boolean value. If the condition's result is true, the value of the *result* expression that follows the condition will be the final result of the ``case`` expression and the subsequent ``when`` branches will not be processed. If the condition's result is not true, any subsequent ``when`` clauses are examined in the same manner. If no ``when`` condition yields true, the value of the ``case`` expression is the result of the ``else`` clause. If the ``else`` clause is omitted and no condition is true, the result is null. .. Hidden: create table case_example cr> create table case_example (id bigint); CREATE OK, 1 row affected (... sec) cr> insert into case_example (id) values (0),(1),(2),(3); INSERT OK, 4 rows affected (... sec) cr> refresh table case_example REFRESH OK, 1 row affected (... sec) Example:: cr> select id, ... case when id = 0 then 'zero' ... when id % 2 = 0 then 'even' ... else 'odd' ... end as parity ... from case_example order by id; +----+--------+ | id | parity | +----+--------+ | 0 | zero | | 1 | odd | | 2 | even | | 3 | odd | +----+--------+ SELECT 4 rows in set (... sec) As a variant, a ``case`` expression can be written using the *simple* form:: CASE expression WHEN value THEN result [WHEN ...] [ELSE result] END Example:: cr> select id, ... case id when 0 then 'zero' ... when 1 then 'one' ... else 'other' ... end as description ... from case_example order by id; +----+-------------+ | id | description | +----+-------------+ | 0 | zero | | 1 | one | | 2 | other | | 3 | other | +----+-------------+ SELECT 4 rows in set (... sec) .. NOTE:: All *result* expressions must be convertible to a single data type. .. 
Hidden: drop table case_example cr> drop table case_example; DROP OK, 1 row affected (... sec) .. _scalar-if: ``if(condition, result [, default])`` ------------------------------------- The ``if`` function is a conditional function comparable to the *if* statements of most other programming languages. If the given *condition* :ref:`expression ` :ref:`evaluates ` to ``true``, the *result* expression is evaluated and its value is returned. If the *condition* evaluates to ``false``, the *result* expression is not evaluated; instead, the optional *default* expression is evaluated and its value is returned. If the *default* argument is omitted, ``NULL`` will be returned instead. .. Hidden: create table if_example cr> create table if_example (id bigint); CREATE OK, 1 row affected (... sec) cr> insert into if_example (id) values (0),(1),(2),(3); INSERT OK, 4 rows affected (... sec) cr> refresh table if_example REFRESH OK, 1 row affected (... sec) :: cr> select ... id, ... if(id = 0, 'zero', 'other') as description ... from if_example ... order by id; +----+-------------+ | id | description | +----+-------------+ | 0 | zero | | 1 | other | | 2 | other | | 3 | other | +----+-------------+ SELECT 4 rows in set (... sec) .. Hidden: drop table if_example cr> drop table if_example; DROP OK, 1 row affected (... sec) .. _scalar-coalesce: ``coalesce('first_arg', second_arg [, ... ])`` ---------------------------------------------- The ``coalesce`` function takes one or more arguments of the same type and returns the first non-null value of these. The result will be NULL only if all the arguments :ref:`evaluate ` to NULL. Returns: same type as arguments :: cr> select coalesce(clustered_by, 'nothing') AS clustered_by ... from information_schema.tables ... where table_name='nodes'; +--------------+ | clustered_by | +--------------+ | nothing | +--------------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``coalesce`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-greatest: ``greatest('first_arg', second_arg[ , ... ])`` ---------------------------------------------- The ``greatest`` function takes one or more arguments of the same type and will return the largest value of these. NULL values in the arguments list are ignored. The result will be NULL only if all the arguments :ref:`evaluate ` to NULL. Returns: same type as arguments :: cr> select greatest(1, 2) AS greatest; +----------+ | greatest | +----------+ | 2 | +----------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``greatest`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-least: ``least('first_arg', second_arg[ , ... ])`` ------------------------------------------- The ``least`` function takes one or more arguments of the same type and will return the smallest value of these. NULL values in the arguments list are ignored. The result will be NULL only if all the arguments :ref:`evaluate ` to NULL. Returns: same type as arguments :: cr> select least(1, 2) AS least; +-------+ | least | +-------+ | 1 | +-------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``least`` will try to cast them to a common type, and if it fails to do so, an error is thrown.
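As the notes above describe, ``coalesce``, ``greatest``, and ``least`` cast arguments of differing data types to a common type. The following statement is an illustrative sketch only (it is not part of the doctested examples); assuming the usual numeric type precedence, the integer argument is expected to be cast to the decimal type of the other argument::

    -- Illustrative only: 1 is cast to the common type shared with 2.5,
    -- so greatest() is expected to yield 2.5 and least() 1.0.
    SELECT greatest(1, 2.5) AS g, least(1, 2.5) AS l;

..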
_scalar-nullif: ``nullif('first_arg', second_arg)`` ----------------------------------- The ``nullif`` function compares two arguments of the same type and, if they have the same value, returns NULL; otherwise returns the first argument. Returns: same type as arguments :: cr> select nullif(table_schema, 'sys') AS nullif ... from information_schema.tables ... where table_name='nodes'; +--------+ | nullif | +--------+ | NULL | +--------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``nullif`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-sysinfo: System information functions ============================ .. _scalar-current_schema: ``CURRENT_SCHEMA`` ------------------ The ``CURRENT_SCHEMA`` system information function returns the name of the current schema of the session. If no current schema is set, this function will return the default schema, which is ``doc``. Returns: ``text`` The default schema can be set when using the `JDBC client `_ and :ref:`HTTP clients ` such as `CrateDB PDO`_. .. NOTE:: The ``CURRENT_SCHEMA`` function has a special SQL syntax, meaning that it must be called without trailing parenthesis (``()``). However, CrateDB also supports the optional parenthesis. Synopsis:: CURRENT_SCHEMA [ ( ) ] Example:: cr> SELECT CURRENT_SCHEMA; +----------------+ | current_schema | +----------------+ | doc | +----------------+ SELECT 1 row in set (... sec) .. _scalar-current_schemas: ``CURRENT_SCHEMAS(boolean)`` ---------------------------- The ``CURRENT_SCHEMAS()`` system information function returns the current stored schemas inside the :ref:`search_path ` session state, optionally including implicit schemas (e.g. ``pg_catalog``). If no custom :ref:`search_path ` is set, this function will return the default :ref:`search_path ` schemas. Returns: ``array(text)`` Synopsis:: CURRENT_SCHEMAS ( boolean ) Example:: cr> SELECT CURRENT_SCHEMAS(true) AS schemas; +-----------------------+ | schemas | +-----------------------+ | ["pg_catalog", "doc"] | +-----------------------+ SELECT 1 row in set (... sec) .. _scalar-current_user: ``CURRENT_USER`` ---------------- The ``CURRENT_USER`` system information function returns the name of the current connected user or ``crate`` if the user management module is disabled. Returns: ``text`` Synopsis:: CURRENT_USER Example:: cr> select current_user AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-current_role: ``CURRENT_ROLE`` ---------------- Equivalent to `CURRENT_USER`_. Returns: ``text`` Synopsis:: CURRENT_ROLE Example:: cr> select current_role AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-user: ``USER`` -------- Equivalent to `CURRENT_USER`_. Returns: ``text`` Synopsis:: USER Example:: cr> select user AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-session_user: ``SESSION_USER`` ---------------- The ``SESSION_USER`` system information function returns the name of the current connected user or ``crate`` if the user management module is disabled. Returns: ``text`` Synopsis:: SESSION_USER Example:: cr> select session_user AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. NOTE:: CrateDB doesn't currently support the switching of execution context. This makes `SESSION_USER`_ functionally equivalent to `CURRENT_USER`_. 
We provide it as it's part of the SQL standard. Additionally, the `CURRENT_USER`_, `SESSION_USER`_ and `USER`_ functions have a special SQL syntax, meaning that they must be called without trailing parenthesis (``()``). .. _scalar-has-database-priv: ``has_database_privilege([user,] database, privilege text)`` ------------------------------------------------------------ Returns ``boolean`` or ``NULL`` if at least one argument is ``NULL``. First argument is ``TEXT`` user name or ``INTEGER`` user OID. If user is not specified current user is used as an argument. Second argument is ``TEXT`` database name or ``INTEGER`` database OID. .. NOTE:: Only `crate` is valid for database name and only `0` is valid for database OID. Third argument is privilege(s) to check. Multiple privileges can be provided as a comma separated list, in which case the result will be ``true`` if any of the listed privileges is held. Allowed privilege types are ``CONNECT``, ``CREATE`` and ``TEMP`` or ``TEMPORARY``. Privilege string is case insensitive and extra whitespace is allowed between privilege names. Duplicate entries in privilege string are allowed. :CONNECT: is ``true`` for all defined users in the database :CREATE: is ``true`` if the user has any ``DDL`` privilege on ``CLUSTER`` or on any ``SCHEMA`` :TEMP: is ``false`` for all users Example:: cr> select has_database_privilege('crate', ' Connect , CREATe ') ... as has_priv; +----------+ | has_priv | +----------+ | TRUE | +----------+ SELECT 1 row in set (... sec) .. _scalar-has-schema-priv: ``has_schema_privilege([user,] schema, privilege text)`` -------------------------------------------------------- Returns ``boolean`` or ``NULL`` if at least one argument is ``NULL``. First argument is ``TEXT`` user name or ``INTEGER`` user OID. If user is not specified current user is used as an argument. Second argument is ``TEXT`` schema name or ``INTEGER`` schema OID. Third argument is privilege(s) to check. Multiple privileges can be provided as a comma separated list, in which case the result will be ``true`` if any of the listed privileges is held. Allowed privilege types are ``CREATE`` and ``USAGE`` which corresponds to CrateDB's ``DDL`` and ``DQL``. Privilege string is case insensitive and extra whitespace is allowed between privilege names. Duplicate entries in privilege string are allowed. Example:: cr> select has_schema_privilege('pg_catalog', ' Create , UsaGe , CREATe ') ... as has_priv; +----------+ | has_priv | +----------+ | TRUE | +----------+ SELECT 1 row in set (... sec) .. NOTE:: For unknown schemas: - Returns ``TRUE`` for superusers. - For a user with ``DQL`` on cluster scope, returns ``TRUE`` if the privilege type is ``USAGE``. - For a user with ``DML`` on cluster scope, returns ``TRUE`` if the privilege type is ``CREATE``. - Returns ``FALSE`` otherwise. .. _scalar-has-table-priv: ``has_table_privilege([user,] table, privilege text)`` ------------------------------------------------------ Returns ``boolean`` or ``NULL`` if at least one argument is ``NULL``. First argument is ``TEXT`` user name or ``INTEGER`` user OID. If user is not specified current user is used as an argument. Second argument is ``TEXT`` table name or ``INTEGER`` table OID. Third argument is privilege(s) to check. Multiple privileges can be provided as a comma separated list, in which case the result will be ``true`` if any of the listed privileges is held. 
Allowed privilege types are ``SELECT`` which corresponds to CrateDB's ``DQL`` and ``INSERT``, ``UPDATE``, ``DELETE`` which all correspond to CrateDB's ``DML``. Privilege string is case insensitive and extra whitespace is allowed between privilege names. Duplicate entries in privilege string are allowed. Example:: cr> select has_table_privilege('sys.summits', ' Select ') ... as has_priv; +----------+ | has_priv | +----------+ | TRUE | +----------+ SELECT 1 row in set (... sec) .. NOTE:: For unknown tables: - Returns ``TRUE`` for superusers. - For a user with ``DQL`` on cluster scope, returns ``TRUE`` if the privilege type is ``SELECT``. - For a user with ``DML`` on cluster scope, returns ``TRUE`` if the privilege type is ``INSERT``, ``UPDATE`` or ``DELETE``. - For a user with ``DQL`` on the schema, returns ``TRUE`` if the privilege type is ``SELECT``. - For a user with ``DML`` on the schema, returns ``TRUE`` if the privilege type is ``INSERT``, ``UPDATE`` or ``DELETE``. - Returns ``FALSE`` otherwise. .. _scalar-pg_backend_pid: ``pg_backend_pid()`` -------------------- The ``pg_backend_pid()`` system information function is implemented for enhanced compatibility with PostgreSQL. CrateDB will always return ``-1`` as there isn't a single process attached to one query. This is different to PostgreSQL, where this represents the process ID of the server process attached to the current session. Returns: ``integer`` Synopsis:: pg_backend_pid() Example:: cr> select pg_backend_pid() AS pid; +-----+ | pid | +-----+ | -1 | +-----+ SELECT 1 row in set (... sec) .. _scalar-pg_postmaster_start_time: ``pg_postmaster_start_time()`` ------------------------------ Returns the server start time as ``timestamp with time zone``. .. _scalar-current_database: ``current_database()`` ---------------------- The ``current_database`` function returns the name of the current database, which in CrateDB will always be ``crate``:: cr> select current_database() AS db; +-------+ | db | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-current_setting: ``current_setting(text [,boolean])`` ------------------------------------ The ``current_setting`` function returns the current value of a :ref:`session setting `. Returns: ``text`` Synopsis:: current_setting(setting_name [, missing_ok]) If no setting exists for ``setting_name``, current_setting throws an error, unless ``missing_ok`` argument is provided and is true. Examples:: cr> select current_setting('search_path') AS search_path; +-------------+ | search_path | +-------------+ | doc | +-------------+ SELECT 1 row in set (... sec) :: cr> select current_setting('foo'); SQLParseException[Unrecognised Setting: foo] :: cr> select current_setting('foo', true) AS foo; +------+ | foo | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_expr: ``pg_get_expr()`` ----------------- The function ``pg_get_expr`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. The function always returns ``null``. Synopsis:: pg_get_expr(expr text, relation_oid int [, pretty boolean]) Example:: cr> select pg_get_expr('literal', 1) AS col; +------+ | col | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_partkeydef: ``pg_get_partkeydef()`` ----------------------- The function ``pg_get_partkeydef`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. Partitioning in CrateDB is different from PostgreSQL, therefore this function always returns ``null``. 
Synopsis:: pg_get_partkeydef(relation_oid int) Example:: cr> select pg_get_partkeydef(1) AS col; +------+ | col | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_serial_sequence: ``pg_get_serial_sequence()`` ---------------------------- The function ``pg_get_serial_sequence`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. The function always returns ``null``. Existence of tables or columns is not validated. Synopsis:: pg_get_serial_sequence(table_name text, column_name text) Example:: cr> select pg_get_serial_sequence('t1', 'c1') AS col; +------+ | col | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_encoding_to_char: ``pg_encoding_to_char()`` ------------------------- The function ``pg_encoding_to_char`` converts an PostgreSQL encoding's internal identifier to a human-readable name. Returns: ``text`` Synopsis:: pg_encoding_to_char(encoding int) Example:: cr> select pg_encoding_to_char(6) AS encoding; +----------+ | encoding | +----------+ | UTF8 | +----------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_userbyid: ``pg_get_userbyid()`` --------------------- The function ``pg_get_userbyid`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. The function always returns the default CrateDB user for non-null arguments, otherwise, ``null`` is returned. Returns: ``text`` Synopsis:: pg_get_userbyid(id integer) Example:: cr> select pg_get_userbyid(-450373579) AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-pg_typeof: ``pg_typeof()`` --------------- The function ``pg_typeof`` returns the text representation of the value's data type passed to it. Returns: ``text`` Synopsis:: pg_typeof(expression) Example: :: cr> select pg_typeof([1, 2, 3]) as typeof; +---------------+ | typeof | +---------------+ | integer_array | +---------------+ SELECT 1 row in set (... sec) .. _scalar-pg_function_is_visible: ``pg_function_is_visible()`` ---------------------------- The function ``pg_function_is_visible`` returns true for OIDs that refer to a system or a user defined function. Returns: ``boolean`` Synopsis:: pg_function_is_visible(OID) Example: :: cr> select pg_function_is_visible(-919555782) as pg_function_is_visible; +------------------------+ | pg_function_is_visible | +------------------------+ | TRUE | +------------------------+ SELECT 1 row in set (... sec) .. _scalar-pg_table_is_visible: ``pg_table_is_visible()`` ------------------------- The function ``pg_table_is_visible`` accepts an OID as an argument. It returns ``true`` if the current user holds at least one of ``DQL``, ``DDL`` or ``DML`` privilege on the table or view referred by the OID and there are no other tables or views with the same name and privileges but with different schema names appearing earlier in the search path. Returns: ``boolean`` Example: :: cr> select pg_table_is_visible(912037690) as is_visible; +------------+ | is_visible | +------------+ | TRUE | +------------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_function_result: ``pg_get_function_result()`` ---------------------------- The function ``pg_get_function_result`` returns the text representation of the return value's data type of the function referred by the OID. 
Returns: ``text`` Synopsis:: pg_get_function_result(OID) Example: :: cr> select pg_get_function_result(-919555782) as _pg_get_function_result; +-------------------------+ | _pg_get_function_result | +-------------------------+ | time with time zone | +-------------------------+ SELECT 1 row in set (... sec) .. _scalar-version: ``version()`` ------------- Returns the CrateDB version information. Returns: ``text`` Synopsis:: version() Example: :: cr> select version() AS version; +---------...-+ | version | +---------...-+ | CrateDB ... | +---------...-+ SELECT 1 row in set (... sec) .. _scalar-col_description: ``col_description(integer, integer)`` ------------------------------------- This function exists mainly for compatibility with PostgreSQL. In PostgreSQL, the function returns the comment for a table column. CrateDB doesn't support user defined comments for table columns, so it always returns ``null``. Returns: ``text`` Example: :: cr> SELECT pg_catalog.col_description(1, 1) AS comment; +---------+ | comment | +---------+ | NULL | +---------+ SELECT 1 row in set (... sec) .. _scalar-obj_description: ``obj_description(integer, text)`` ---------------------------------- This function exists mainly for compatibility with PostgreSQL. In PostgreSQL, the function returns the comment for a database object. CrateDB doesn't support user defined comments for database objects, so it always returns ``null``. Returns: ``text`` Example: :: cr> SELECT pg_catalog.obj_description(1, 'pg_type') AS comment; +---------+ | comment | +---------+ | NULL | +---------+ SELECT 1 row in set (... sec) .. _scalar-format_type: ``format_type(integer, integer)`` --------------------------------- Returns the type name of a type. The first argument is the ``OID`` of the type. The second argument is the type modifier. This function exists for PostgreSQL compatibility and the type modifier is always ignored. Returns: ``text`` Example: :: cr> SELECT pg_catalog.format_type(25, null) AS name; +------+ | name | +------+ | text | +------+ SELECT 1 row in set (... sec) If the given ``OID`` is not know, ``???`` is returned:: cr> SELECT pg_catalog.format_type(3, null) AS name; +------+ | name | +------+ | ??? | +------+ SELECT 1 row in set (... sec) .. _scalar-special: Special functions ================= .. _scalar_knn_match: ``knn_match(float_vector, float_vector, int)`` ---------------------------------------------- The ``knn_match`` function uses a k-nearest neighbour (kNN) search algorithm to find vectors that are similar to a query vector. The first argument is the column to search. The second argument is the query vector. The third argument is the number of nearest neighbours to search in the index. Searching a larger number of nearest neighbours is more expensive. There is one index per shard, and on each shard the function will match at most `k` records. To limit the total query result, add a :ref:`LIMIT clause ` to the query. ``knn_match(search_vector, target, k)`` This function must be used within a ``WHERE`` clause targeting a table to use it as a predicate that searches the whole dataset of a table. Using it *outside* of a ``WHERE`` clause, or in a ``WHERE`` clause targeting a virtual table instead of a physical table, results in an error. Similar to the :ref:`MATCH predicate `, this function affects the :ref:`_score ` value. An example:: cr> CREATE TABLE IF NOT EXISTS doc.vectors ( ... xs float_vector(2) ... ); CREATE OK, 1 row affected (... sec) cr> INSERT INTO doc.vectors (xs) ... VALUES ... ([3.14, 8.17]), ... 
([14.3, 19.4]); INSERT OK, 2 rows affected (... sec) .. HIDE: cr> REFRESH TABLE doc.vectors; REFRESH OK, 1 row affected (... sec) :: cr> SELECT xs, _score FROM doc.vectors ... WHERE knn_match(xs, [3.14, 8], 2) ... ORDER BY _score DESC; +--------------+--------------+ | xs | _score | +--------------+--------------+ | [3.14, 8.17] | 0.9719117 | | [14.3, 19.4] | 0.0039138086 | +--------------+--------------+ SELECT 2 rows in set (... sec) .. _scalar-ignore3vl: ``ignore3vl(boolean)`` ---------------------- The ``ignore3vl`` function operates on a boolean argument and eliminates the `3-valued logic`_ on the whole tree of :ref:`operators ` beneath it. More specifically, ``FALSE`` is :ref:`evaluated ` to ``FALSE``, ``TRUE`` to ``TRUE`` and ``NULL`` to ``FALSE``. Returns: ``boolean`` .. HIDE: cr> CREATE TABLE IF NOT EXISTS doc.t( ... int_array_col array(integer) ... ); CREATE OK, 1 row affected (... sec) cr> INSERT INTO doc.t(int_array_col) ... VALUES ([1,2,3, null]); INSERT OK, 1 row affected (... sec) cr> REFRESH table doc.t; REFRESH OK, 1 row affected (... sec) .. NOTE:: The main usage of the ``ignore3vl`` function is in the ``WHERE`` clause when a ``NOT`` operator is involved. Such filtering, with `3-valued logic`_, cannot be translated to an optimized query in the internal storage engine, and therefore can degrade performance. E.g.:: SELECT * FROM t WHERE NOT 5 = ANY(t.int_array_col); If we can ignore the `3-valued logic`_, we can write the query as:: SELECT * FROM t WHERE NOT IGNORE3VL(5 = ANY(t.int_array_col)); which will yield better performance (in execution time) than before. .. CAUTION:: If there are ``NULL`` values in ``int_array_col`` and ``5 = ANY(t.int_array_col)`` evaluates to ``NULL``, then without ``ignore3vl`` it would be evaluated as ``NOT NULL`` => ``NULL``, resulting in zero matched rows. With the ``IGNORE3VL`` in place it will be evaluated as ``NOT FALSE`` => ``TRUE``, resulting in all rows matching the filter. E.g.:: cr> SELECT * FROM t ... WHERE NOT 5 = ANY(t.int_array_col); +---------------+ | int_array_col | +---------------+ +---------------+ SELECT 0 rows in set (... sec) :: cr> SELECT * FROM t ... WHERE NOT IGNORE3VL(5 = ANY(t.int_array_col)); +-----------------+ | int_array_col | +-----------------+ | [1, 2, 3, null] | +-----------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE IF EXISTS doc.t; DROP OK, 1 row affected (... sec) Synopsis:: ignore3vl(boolean) Example:: cr> SELECT ... ignore3vl(true) as v1, ... ignore3vl(false) as v2, ... ignore3vl(null) as v3; +------+-------+-------+ | v1 | v2 | v3 | +------+-------+-------+ | TRUE | FALSE | FALSE | +------+-------+-------+ SELECT 1 row in set (... sec) .. _scalar-vector: Vector functions ================ .. _scalar_vector_similarity: ``vector_similarity(float_vector, float_vector)`` -------------------------------------------------------- Returns the similarity of two :ref:`FLOAT_VECTORS ` as a :ref:`FLOAT ` typed value. Similarity is based on Euclidean distance and lies in the range ``(0,1]``. If the two vectors coincide, the function returns the maximum possible similarity of 1; the greater the distance between the vectors, the closer the similarity gets to 0. If at least one argument is ``NULL``, the function returns ``NULL``. An example:: cr> SELECT vector_similarity([1.2, 1.3], [10.2, 10.3]) AS vs; +-------------+ | vs | +-------------+ | 0.006134969 | +-------------+ SELECT 1 row in set (... sec) .. _3-valued logic: https://en.wikipedia.org/wiki/Null_(SQL)#Comparisons_with_NULL_and_the_three-valued_logic_(3VL) ..
_available time zones: https://www.joda.org/joda-time/timezones.html .. _CrateDB PDO: https://crate.io/docs/pdo/en/latest/connect.html .. _Euclidean geometry: https://en.wikipedia.org/wiki/Euclidean_geometry .. _formatter: https://docs.oracle.com/javase/7/docs/api/java/util/Formatter.html .. _geodetic: https://en.wikipedia.org/wiki/Geodesy .. _GeoJSON: https://geojson.org/ .. _Haversine formula: https://en.wikipedia.org/wiki/Haversine_formula .. _Java DateTimeFormatter: https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html .. _Java DecimalFormat: https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html .. _Java Regular Expressions: https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html .. _Joda-Time: https://www.joda.org/joda-time/ .. _Lucene Regular Expressions: https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/util/automaton/RegExp.html .. _MySQL date_format: https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_date-format .. _WKT: https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry

(connect)=
# Connect to a CrateDB cluster

This documentation section is about connecting your applications to CrateDB and CrateDB Cloud, using database drivers, compatibility adapters, and dialects. ## Protocol Support CrateDB supports both the [HTTP protocol] and the [PostgreSQL wire protocol], which ensures that many clients that work with PostgreSQL will also work with CrateDB. Through corresponding drivers, CrateDB is compatible with [ODBC], [JDBC], and other database API specifications. While we generally recommend the PostgreSQL interface (PG) for maximum compatibility in PostgreSQL environments, the HTTP interface supports [CrateDB bulk operations] and [CrateDB BLOBs], which are not supported by the PostgreSQL protocol. The HTTP protocol can also be used to connect from environments where PostgreSQL-based communication is not applicable. ## Configure In order to connect to CrateDB, your application or driver needs to be configured with corresponding connection properties. Please note that different applications and drivers may obtain connection properties in different formats. ::::::{tab-set} :::::{tab-item} CrateDB and CrateDB Cloud ::::{grid} :margin: 0 :padding: 0 :::{grid-item} :columns: 4 :margin: 0 :padding: 0 **Connection properties** :Host: ``.cratedb.net :Port: 5432 (PostgreSQL) or
4200 (HTTP) :User: `` :Pass: `` ::: :::{grid-item} :columns: 8 :margin: 0 :padding: 0 :class: driver-slim **Connection-string examples**

**Native PostgreSQL, psql** ``` postgresql://:@.cratedb.net:5432/doc ``` **JDBC: PostgreSQL pgJDBC** ``` jdbc:postgresql://:@.cratedb.net:5432/doc ``` **JDBC: CrateDB JDBC, e.g. Apache Flink** ``` jdbc:crate://:@.cratedb.net:5432/doc ``` **HTTP: Admin UI, CLI, CrateDB drivers** ``` https://:@.cratedb.net:4200/ ``` **SQLAlchemy** ``` crate://:@.cratedb.net:4200/?schema=doc&ssl=true ``` ::: :::: ::::: :::::{tab-item} CrateDB on localhost ::::{grid} :margin: 0 :padding: 0 :::{grid-item} :columns: 4 :margin: 0 :padding: 0 **Connection properties** :Host: localhost :Port: 5432 (PostgreSQL) or
4200 (HTTP) :User: `crate` :Pass: (empty) ::: :::{grid-item} :columns: 8 :margin: 0 :padding: 0 :class: driver-slim **Connection-string examples**

**Native PostgreSQL, psql** ``` postgresql://crate@localhost:5432/doc ``` **JDBC: PostgreSQL pgJDBC** ``` jdbc:postgresql://crate@localhost:5432/doc ``` **JDBC: CrateDB JDBC, e.g. Apache Flink** ``` jdbc:crate://:@localhost:5432/doc ``` **HTTP: Admin UI, CLI, CrateDB drivers** ``` http://crate@localhost:4200/ ``` **SQLAlchemy** ``` crate://crate@localhost:4200/?schema=doc ``` ::: :::: ::::: :::::: ```{tip} - CrateDB's fixed catalog name is `crate`, the default schema name is `doc`. - CrateDB does not implement the notion of a database; however, tables can be created in different [schemas]. - When asked for a *database name*, specifying a schema name (any), or the fixed catalog name `crate` may be applicable. - If a database-/schema-name is omitted while connecting, the PostgreSQL drivers may default to the "username". - The predefined [superuser] on an unconfigured CrateDB cluster is called `crate`, defined without a password. - For authenticating properly, please learn about the available [authentication] options. ``` ## Client Libraries This section lists drivers and adapters for relevant programming languages, frameworks, and environments. ### PostgreSQL The drivers listed in this section all use the [CrateDB PostgreSQL interface]. ::::{sd-table} :widths: 2 3 5 2 :row-class: top-border :::{sd-row} ```{sd-item} ``` ```{sd-item} **Driver/Adapter** ``` ```{sd-item} **Description** ``` ```{sd-item} **Info** ``` ::: :::{sd-row} ```{sd-item} \- ``` ```{sd-item} [PostgreSQL ODBC](https://odbc.postgresql.org/) ``` ```{sd-item} The official PostgreSQL ODBC Driver. For connecting to CrateDB from any environment that supports it. ``` ```{sd-item} ``` ::: :::{sd-row} ```{sd-item} .NET ``` ```{sd-item} [Npgsql](https://www.npgsql.org/) ``` ```{sd-item} An open source ADO.NET Data Provider for PostgreSQL, for programs written in C#, Visual Basic, and F#. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/npgsql/npgsql?label=latest)](https://github.com/npgsql/npgsql) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/csharp-npgsql) ``` ::: :::{sd-row} ```{sd-item} .NET ``` ```{sd-item} [CrateDB Npgsql fork](https://cratedb.com/docs/npgsql/) ``` ```{sd-item} This fork of the official driver was needed prior to CrateDB 4.2. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/npgsql?label=latest)](https://github.com/crate/npgsql) ``` ::: :::{sd-row} ```{sd-item} Golang ``` ```{sd-item} [pgx](https://github.com/jackc/pgx) ``` ```{sd-item} A pure Go driver and toolkit for PostgreSQL. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/jackc/pgx?label=latest)](https://github.com/jackc/pgx) ``` ::: :::{sd-row} ```{sd-item} Java ``` ```{sd-item} [PostgreSQL JDBC](https://jdbc.postgresql.org/) ``` ```{sd-item} The official PostgreSQL JDBC Driver. For connecting to CrateDB from any environment that supports it. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/pgjdbc/pgjdbc?label=latest)](https://github.com/pgjdbc/pgjdbc) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#java) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/java-jdbc) ``` ::: :::{sd-row} ```{sd-item} Java ``` ```{sd-item} [CrateDB PgJDBC fork](https://cratedb.com/docs/jdbc/) ``` ```{sd-item} For connecting to CrateDB with specialized type system support and other tweaks. Ignores the `ROLLBACK` statement and the `hstore` and `jsonb` extensions.
``` ```{sd-item} [![](https://img.shields.io/maven-central/v/io.crate/crate-jdbc?label=latest)](https://github.com/crate/pgjdbc) ``` ::: :::{sd-row} ```{sd-item} Node.js ``` ```{sd-item} [node-postgres](https://node-postgres.com/) ``` ```{sd-item} A collection of Node.js modules for interfacing with a PostgreSQL database using JavaScript or TypeScript. Has support for callbacks, promises, async/await, connection pooling, prepared statements, cursors, streaming results, C/C++ bindings, rich type parsing, and more. ``` ```{sd-item} [![](https://img.shields.io/npm/v/pg?label=latest&color=blue)](https://github.com/brianc/node-postgres) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#javascript) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [PDO_PGSQL](https://www.php.net/manual/en/ref.pdo-pgsql.php) ``` ```{sd-item} For connecting to CrateDB from PHP, supporting its PDO interface. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/php/php-src?label=latest)](https://github.com/php/php-src/tree/master/ext/pdo_pgsql) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/php-pdo) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [AMPHP](https://amphp.org/) ``` ```{sd-item} For connecting to CrateDB using AMPHP, an Async PostgreSQL client for PHP. AMPHP is a collection of high-quality, event-driven libraries for PHP designed with fibers and concurrency in mind. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/amphp/postgres?label=latest)](https://github.com/amphp/postgres) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/php-amphp) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [aoipg](https://github.com/aio-libs/aiopg) ``` ```{sd-item} For connecting to CrateDB from Python, supporting Python's `asyncio` (PEP-3156/tulip) framework. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/aio-libs/aiopg?label=latest)](https://github.com/aio-libs/aiopg) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#aiopg) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [asyncpg](https://github.com/MagicStack/asyncpg) ``` ```{sd-item} For connecting to CrateDB from Python, supporting Python's `asyncio`. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/MagicStack/asyncpg?label=latest)](https://github.com/MagicStack/asyncpg) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#psycopg2) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [psycopg3](https://www.psycopg.org/psycopg3/docs/) ``` ```{sd-item} For connecting to CrateDB from Python, supporting Python's `asyncio`. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/psycopg/psycopg?label=latest)](https://github.com/psycopg/psycopg) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#psycopg3) ``` ::: :::: ### HTTP The drivers listed in this section all use the [CrateDB HTTP interface]. ::::{sd-table} :widths: 2 3 5 2 :row-class: top-border :::{sd-row} ```{sd-item} ``` ```{sd-item} **Driver/Adapter** ``` ```{sd-item} **Description** ``` ```{sd-item} **Info** ``` ::: :::{sd-row} ```{sd-item} MicroPython ``` ```{sd-item} [micropython-cratedb](https://github.com/crate/micropython-cratedb) ``` ```{sd-item} A MicroPython library connecting to the CrateDB HTTP API. 
``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/micropython-cratedb?label=latest)](https://github.com/crate/micropython-cratedb) ``` ::: :::{sd-row} ```{sd-item} Node.js ``` ```{sd-item} [node-crate](https://www.npmjs.com/package/node-crate) ``` ```{sd-item} A JavaScript library connecting to the CrateDB HTTP API. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/megastef/node-crate?label=latest)](https://github.com/megastef/node-crate) [![](https://img.shields.io/badge/example-application-darkcyan)](https://github.com/crate/devrel-shipping-forecast-geo-demo) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [CrateDB PDO driver](https://cratedb.com/docs/pdo/) ``` ```{sd-item} For connecting to CrateDB from PHP. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate-pdo?label=latest)](https://github.com/crate/crate-pdo) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#php) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [CrateDB DBAL adapter](https://cratedb.com/docs/dbal/) ``` ```{sd-item} For connecting to CrateDB from PHP, using DBAL and Doctrine. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate-dbal?label=latest)](https://github.com/crate/crate-dbal) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#php) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [CrateDB Python driver](https://cratedb.com/docs/python/) ``` ```{sd-item} For connecting to CrateDB from Python. Has support for [CrateDB BLOBs]. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate-python?label=latest)](https://github.com/crate/crate-python) [![](https://img.shields.io/badge/docs-by%20example-darkgreen)][python-dbapi-by-example] [![](https://img.shields.io/badge/example-snippet-darkcyan)](#crate-python) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [SQLAlchemy dialect](https://cratedb.com/docs/sqlalchemy-cratedb/) ``` ```{sd-item} For connecting to CrateDB from Python, using SQLAlchemy. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/sqlalchemy-cratedb?label=latest)](https://github.com/crate/sqlalchemy-cratedb) [![](https://img.shields.io/badge/docs-by%20example-darkgreen)][python-sqlalchemy-by-example] [![](https://img.shields.io/badge/example-snippet-darkcyan)](#sqlalchemy-cratedb) ``` ::: :::{sd-row} ```{sd-item} Ruby ``` ```{sd-item} [CrateDB Ruby driver](https://github.com/crate/crate_ruby) ``` ```{sd-item} A Ruby client library for CrateDB. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate_ruby?label=latest)](https://github.com/crate/crate_ruby) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#ruby) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/ruby) ``` ::: :::{sd-row} ```{sd-item} Ruby ``` ```{sd-item} [CrateDB ActiveRecord adapter](https://github.com/crate/activerecord-crate-adapter) ``` ```{sd-item} Ruby on Rails ActiveRecord adapter for CrateDB. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/activerecord-crate-adapter?label=latest)](https://github.com/crate/activerecord-crate-adapter) ``` ::: :::: ```{tip} Please visit the [](#build-status) page for an overview about the integration status of the client drivers listed above, and more. 
``` ```{toctree} :maxdepth: 1 :hidden: java javascript php python ruby ``` [ADBC]: https://arrow.apache.org/docs/format/ADBC.html [Authentication]: inv:crate-reference:*:label#admin_auth [CrateDB BLOBs]: inv:crate-reference:*:label#blob_support [CrateDB bulk operations]: inv:crate-reference:*:label#http-bulk-ops [CrateDB HTTP interface]: inv:crate-reference:*:label#interface-http [CrateDB PostgreSQL interface]: inv:crate-reference:*:label#interface-postgresql [HTTP protocol]: https://en.wikipedia.org/wiki/HTTP [JDBC]: https://en.wikipedia.org/wiki/Java_Database_Connectivity [ODBC]: https://en.wikipedia.org/wiki/Open_Database_Connectivity [PostgreSQL wire protocol]: https://www.postgresql.org/docs/current/protocol.html [python-dbapi-by-example]: inv:crate-python:*:label#by-example [python-sqlalchemy-by-example]: inv:sqlalchemy-cratedb:*:label#by-example [schema]: inv:crate-reference:*:label#ddl-create-table-schemas [schemas]: inv:crate-reference:*:label#ddl-create-table-schemas [superuser]: inv:crate-reference:*:label#administration_user_management
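Whichever driver you choose, a quick way to verify a connection is to run a trivial query against the system tables. A minimal sketch, assuming nothing beyond an established session; it works from the Admin UI, `crash`, or any PostgreSQL-compatible client:

```sql
-- Confirm the effective user and schema of the current session.
SELECT CURRENT_USER AS user_name, CURRENT_SCHEMA AS schema_name;

-- List the nodes of the cluster you are connected to, including their versions.
SELECT name, version['number'] AS version FROM sys.nodes ORDER BY name;
```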
.. highlight:: sh .. _conf-cluster-settings: ===================== Cluster-wide settings ===================== All currently applied cluster settings can be read by querying the :ref:`sys.cluster.settings ` column. Most cluster settings can be :ref:`changed at runtime `. This is documented at each setting. .. rubric:: Table of contents .. contents:: :local: .. _applying-cluster-settings: Non-runtime cluster-wide settings --------------------------------- Cluster-wide settings which cannot be changed at runtime need to be specified in the configuration of each node in the cluster. .. CAUTION:: Cluster settings specified via node configurations are required to be exactly the same on every node in the cluster for proper operation of the cluster. .. _conf_collecting_stats: Collecting stats ---------------- .. _stats.enabled: **stats.enabled** | *Default:* ``true`` | *Runtime:* ``yes`` A boolean indicating whether or not to collect statistical information about the cluster. .. CAUTION:: The collection of statistical information incurs a slight performance penalty, as details about every job and operation across the cluster will cause data to be inserted into the corresponding system tables. .. _stats.jobs_log_size: **stats.jobs_log_size** | *Default:* ``10000`` | *Runtime:* ``yes`` The maximum number of job records to be kept in the :ref:`sys.jobs_log ` table on each node. A job record corresponds to a single SQL statement to be executed on the cluster. These records are used for performance analytics. A larger job log produces more comprehensive stats, but uses more RAM. Older job records are deleted as newer records are added, once the limit is reached. Setting this value to ``0`` disables collecting job information. .. _stats.jobs_log_expiration: **stats.jobs_log_expiration** | *Default:* ``0s`` (disabled) | *Runtime:* ``yes`` The job record expiry time in seconds. Job records in the :ref:`sys.jobs_log ` table are periodically cleared if they are older than the expiry time. This setting overrides :ref:`stats.jobs_log_size `. If the value is set to ``0``, time-based log entry eviction is disabled. .. NOTE:: If both the :ref:`stats.jobs_log_size ` and :ref:`stats.jobs_log_expiration ` settings are disabled, jobs will not be recorded. .. _stats.jobs_log_filter: **stats.jobs_log_filter** | *Default:* ``true`` (Include everything) | *Runtime:* ``yes`` An :ref:`expression ` to determine if a job should be recorded into ``sys.jobs_log``. The expression must :ref:`evaluate ` to a boolean. If it evaluates to ``true`` the statement will show up in ``sys.jobs_log`` until it's evicted due to one of the other rules (expiration or size limit reached). The expression may reference all columns contained in ``sys.jobs_log``. A common use case is to include only jobs that took a certain amount of time to execute:: cr> SET GLOBAL "stats.jobs_log_filter" = $$ended - started > '5 minutes'::interval$$; SET OK, 1 row affected (... sec) .. _stats.jobs_log_persistent_filter: **stats.jobs_log_persistent_filter** | *Default:* ``false`` (Include nothing) | *Runtime:* ``yes`` An expression to determine if a job should also be recorded to the regular ``CrateDB`` log. Entries that match this filter will be logged under the ``StatementLog`` logger with the ``INFO`` level. This is similar to ``stats.jobs_log_filter`` except that these entries are persisted to the log file.
This should be used with caution and shouldn't be set to an expression that matches many queries, as the logging operation will block on IO and can therefore affect performance. A common use case is to use this for slow query logging. .. _stats.operations_log_size: **stats.operations_log_size** | *Default:* ``10000`` | *Runtime:* ``yes`` The maximum number of operations records to be kept in the :ref:`sys.operations_log ` table on each node. A job consists of one or more individual operations. Operations records are used for performance analytics. A larger operations log produces more comprehensive stats, but uses more RAM. Older operations records are deleted as newer records are added, once the limit is reached. Setting this value to ``0`` disables collecting operations information. .. _stats.operations_log_expiration: **stats.operations_log_expiration** | *Default:* ``0s`` (disabled) | *Runtime:* ``yes`` Entries of :ref:`sys.operations_log ` are cleared by a periodic job when they are older than the specified expiry time. This setting overrides :ref:`stats.operations_log_size `. If the value is set to ``0``, time-based log entry eviction is disabled. .. NOTE:: If both settings :ref:`stats.operations_log_size ` and :ref:`stats.operations_log_expiration ` are disabled, no operations information will be collected. .. _stats.service.interval: **stats.service.interval** | *Default:* ``24h`` | *Runtime:* ``yes`` Defines the interval at which table statistics, used to produce optimal query execution plans, are refreshed. This field expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). If the value provided is ``0`` then the refresh is disabled. .. CAUTION:: Using a very small value can cause a high load on the cluster. .. _stats.service.max_bytes_per_sec: **stats.service.max_bytes_per_sec** | *Default:* ``40mb`` | *Runtime:* ``yes`` Specifies the maximum number of bytes per second that can be read on data nodes to collect statistics. If this is set to a positive number, the underlying I/O operations of the :ref:`ANALYZE ` statement are throttled. If the value provided is ``0`` then the throttling is disabled. Shard limits ------------ .. _cluster.max_shards_per_node: **cluster.max_shards_per_node** | *Default:* 1000 | *Runtime:* ``yes`` The maximum number of open primary and replica shards per node. This setting is checked on shard creation and doesn't limit shards for individual nodes. To limit the number of shards for each node, use the :ref:`cluster.routing.allocation.total_shards_per_node ` setting. The actual limit being checked is ``max_shards_per_node * number of data nodes``. Any operations that would result in the creation of additional shard copies that would exceed this limit are rejected. For example, if you have 999 shards in the current cluster and you try to create a new table, the create table operation will fail. Similarly, if a write operation would lead to the creation of a new partition, the statement will fail. Each shard on a node requires some memory and increases the size of the cluster state. Having too many shards per node will impact the cluster's stability, and it is therefore discouraged to raise the limit above 1000. .. NOTE:: The maximum number of shards per node setting is also used for the :ref:`sys-node_checks_max_shards_per_node` check. ..
NOTE:: If a table is created with :ref:`sql-create-table-number-of-replicas` provided as a range or default ``0-1`` value, the limit check accounts only for primary shards and not for possible expanded replicas and thus actual number of all shards can exceed the limit. .. _conf_usage_data_collector: Usage data collector -------------------- The settings of the Usage Data Collector are read-only and cannot be set during runtime. Please refer to :ref:`usage_data_collector` to get further information about its usage. .. _udc.enabled: **udc.enabled** | *Default:* ``true`` | *Runtime:* ``no`` ``true``: Enables the Usage Data Collector. ``false``: Disables the Usage Data Collector. .. _udc.initial_delay: **udc.initial_delay** | *Default:* ``10m`` | *Runtime:* ``no`` The delay for first ping after start-up. This field expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). .. _udc.interval: **udc.interval** | *Default:* ``24h`` | *Runtime:* ``no`` The interval a UDC ping is sent. This field expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). .. _udc.url: **udc.url** | *Default:* ``https://udc.crate.io`` | *Runtime:* ``no`` The URL the ping is sent to. .. _conf_graceful_stop: Graceful stop ------------- By default, when the CrateDB process stops it simply shuts down, possibly making some shards unavailable which leads to a *red* cluster state and lets some queries fail that required the now unavailable shards. In order to *safely* shutdown a CrateDB node, the graceful stop procedure can be used. The following cluster settings can be used to change the shutdown behaviour of nodes of the cluster: .. _cluster.graceful_stop.min_availability: **cluster.graceful_stop.min_availability** | *Default:* ``primaries`` | *Runtime:* ``yes`` | *Allowed values:* ``none | primaries | full`` ``none``: No minimum data availability is required. The node may shut down even if records are missing after shutdown. ``primaries``: At least all primary shards need to be available after the node has shut down. Replicas may be missing. ``full``: All records and all replicas need to be available after the node has shut down. Data availability is full. .. NOTE:: This option is ignored if there is only 1 node in a cluster! .. _cluster.graceful_stop.timeout: **cluster.graceful_stop.timeout** | *Default:* ``2h`` | *Runtime:* ``yes`` Defines the maximum waiting time in milliseconds for the :ref:`reallocation ` process to finish. The ``force`` setting will define the behaviour when the shutdown process runs into this timeout. The timeout expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). .. _cluster.graceful_stop.force: **cluster.graceful_stop.force** | *Default:* ``false`` | *Runtime:* ``yes`` Defines whether ``graceful stop`` should force stopping of the node if it runs into the timeout which is specified with the `cluster.graceful_stop.timeout`_ setting. .. _conf_bulk_operations: Bulk operations --------------- SQL DML Statements involving a huge amount of rows like :ref:`sql-copy-from`, :ref:`sql-insert` or :ref:`ref-update` can take an enormous amount of time and resources. The following settings change the behaviour of those queries. .. 
_bulk.request_timeout: **bulk.request_timeout** | *Default:* ``1m`` | *Runtime:* ``yes`` Defines the timeout of internal shard-based requests involved in the execution of SQL DML statements over a large number of rows. .. _conf_discovery: Discovery --------- Data sharding and work splitting are at the core of CrateDB. This is how we manage to execute very fast queries over incredibly large datasets. In order for multiple CrateDB nodes to work together, a cluster needs to be formed. The process of finding other nodes with which to form a cluster is called discovery. Discovery runs when a CrateDB node starts, or when a node is unable to reach the master node, and continues until a master node is found or a new master node is elected. .. _discovery.seed_hosts: **discovery.seed_hosts** | *Default:* ``127.0.0.1`` | *Runtime:* ``no`` In order to form a cluster with CrateDB instances running on other nodes, a list of seed master-eligible nodes needs to be provided. This setting should normally contain the addresses of all the master-eligible nodes in the cluster. In order to seed the discovery process the nodes listed here must be live and contactable. This setting contains either an array of hosts or a comma-delimited string. By default a node will bind to the available loopback and scan for local ports between ``4300`` and ``4400`` to try to connect to other nodes running on the same server. This default behaviour provides local auto clustering without any configuration. Each value should be in the form of host:port or host (where port defaults to the setting ``transport.tcp.port``). .. NOTE:: IPv6 hosts must be bracketed. .. _cluster.initial_master_nodes: **cluster.initial_master_nodes** | *Default:* ``not set`` | *Runtime:* ``no`` Contains a list of node names, fully-qualified hostnames or IP addresses of the master-eligible nodes which will vote in the very first election of a cluster that's bootstrapping for the first time. By default this is not set, meaning it expects this node to join an already formed cluster. In development mode, with no discovery settings configured, this step is performed by the nodes themselves, but this auto-bootstrapping is designed to aid development and is not safe for production. In production you must explicitly list the names or IP addresses of the master-eligible nodes whose votes should be counted in the very first election. .. _discovery.type: **discovery.type** | *Default:* ``zen`` | *Runtime:* ``no`` | *Allowed values:* ``zen | single-node`` Specifies whether CrateDB should form a multiple-node cluster. By default, CrateDB discovers other nodes when forming a cluster and allows other nodes to join the cluster later. If ``discovery.type`` is set to ``single-node``, CrateDB forms a single-node cluster and the node won't join any other clusters. This can be useful for testing. It is not recommended to use this for production setups. The ``single-node`` mode also skips `bootstrap checks`_. .. CAUTION:: If a node is started without any :ref:`initial_master_nodes ` or a :ref:`discovery_type ` set to ``single-node`` (e.g., the default configuration), it will never join a cluster even if the configuration is subsequently changed. It is possible to force the node to forget its current cluster state by using the :ref:`cli-crate-node` CLI tool. However, be aware that this may result in data loss. .. _conf_host_discovery: Unicast host discovery ......................
As described above, CrateDB has built-in support for statically specifying a list of addresses that will act as the seed nodes in the discovery process using the `discovery.seed_hosts`_ setting. CrateDB also has support for several different mechanisms of seed nodes discovery. Currently there are two other discovery types: via DNS and via EC2 API. When a node starts up with one of these discovery types enabled, it performs a lookup using the settings for the specified mechanism listed below. The hosts and ports retrieved from the mechanism will be used to generate a list of unicast hosts for node discovery. The same lookup is also performed by all nodes in a cluster whenever the master is re-elected (see `Cluster Meta Data`). .. _discovery.seed_providers: **discovery.seed_providers** | *Default:* ``not set`` | *Runtime:* ``no`` | *Allowed values:* ``srv``, ``ec2`` See also: `Discovery`_. .. _conf_dns_discovery: Discovery via DNS ````````````````` Crate has built-in support for discovery via DNS. To enable DNS discovery the ``discovery.seed_providers`` setting needs to be set to ``srv``. The order of the unicast hosts is defined by the priority, weight and name of each host defined in the SRV record. For example:: _crate._srv.example.com. 3600 IN SRV 2 20 4300 crate1.example.com. _crate._srv.example.com. 3600 IN SRV 1 10 4300 crate2.example.com. _crate._srv.example.com. 3600 IN SRV 2 10 4300 crate3.example.com. would result in a list of discovery nodes ordered like:: crate2.example.com:4300, crate3.example.com:4300, crate1.example.com:4300 .. _discovery.srv.query: **discovery.srv.query** | *Runtime:* ``no`` The DNS query that is used to look up SRV records, usually in the format ``_service._protocol.fqdn`` If not set, the service discovery will not be able to look up any SRV records. .. _discovery.srv.resolver: **discovery.srv.resolver** | *Runtime:* ``no`` The hostname or IP of the DNS server used to resolve DNS records. If this is not set, or the specified hostname/IP is not resolvable, the default (system) resolver is used. Optionally a custom port can be specified using the format ``hostname:port``. .. _conf_ec2_discovery: Discovery on Amazon EC2 ``````````````````````` CrateDB has built-in support for discovery via the EC2 API. To enable EC2 discovery the ``discovery.seed_providers`` settings needs to be set to ``ec2``. .. _discovery.ec2.access_key: **discovery.ec2.access_key** | *Runtime:* ``no`` The access key ID to identify the API calls. .. _discovery.ec2.secret_key: **discovery.ec2.secret_key** | *Runtime:* ``no`` The secret key to identify the API calls. Following settings control the discovery: .. _discovery.ec2.groups: **discovery.ec2.groups** | *Runtime:* ``no`` A list of security groups; either by ID or name. Only instances with the given group will be used for unicast host discovery. .. _discovery.ec2.any_group: **discovery.ec2.any_group** | *Default:* ``true`` | *Runtime:* ``no`` Defines whether all (``false``) or just any (``true``) security group must be present for the instance to be used for discovery. .. _discovery.ec2.host_type: **discovery.ec2.host_type** | *Default:* ``private_ip`` | *Runtime:* ``no`` | *Allowed values:* ``private_ip``, ``public_ip``, ``private_dns``, ``public_dns`` Defines via which host type to communicate with other instances. .. _discovery.ec2.availability_zones: **discovery.ec2.availability_zones** | *Runtime:* ``no`` A list of availability zones. Only instances within the given availability zone will be used for unicast host discovery. .. 
_discovery.ec2.tag.name: **discovery.ec2.tag.** | *Runtime:* ``no`` EC2 instances for discovery can also be filtered by tags using the ``discovery.ec2.tag.`` prefix plus the tag name. E.g. to filter instances that have the ``environment`` tags with the value ``dev`` your setting will look like: ``discovery.ec2.tag.environment: dev``. .. _discovery.ec2.endpoint: **discovery.ec2.endpoint** | *Runtime:* ``no`` If you have your own compatible implementation of the EC2 API service you can set the endpoint that should be used. .. _conf_routing: Routing allocation ------------------ .. _cluster.routing.allocation.enable: **cluster.routing.allocation.enable** | *Default:* ``all`` | *Runtime:* ``yes`` | *Allowed values:* ``all | none | primaries | new_primaries`` ``all`` allows all :ref:`shard allocations `, the cluster can allocate all kinds of shards. ``none`` allows no shard allocations at all. No shard will be moved or created. ``primaries`` only primaries can be moved or created. This includes existing primary shards. ``new_primaries`` allows allocations for new primary shards only. This means that for example a newly added node will not allocate any replicas. However it is still possible to allocate new primary shards for new indices. Whenever you want to perform a zero downtime upgrade of your cluster you need to set this value before gracefully stopping the first node and reset it to ``all`` after starting the last updated node. .. NOTE:: This allocation setting has no effect on the :ref:`recovery ` of primary shards! Even when ``cluster.routing.allocation.enable`` is set to ``none``, nodes will recover their unassigned local primary shards immediately after restart. .. _cluster.routing.rebalance.enable: **cluster.routing.rebalance.enable** | *Default:* ``all`` | *Runtime:* ``yes`` | *Allowed values:* ``all | none | primaries | replicas`` Enables or disables rebalancing for different types of shards: - ``all`` allows shard rebalancing for all types of shards. - ``none`` disables shard rebalancing for any types. - ``primaries`` allows shard rebalancing only for primary shards. - ``replicas`` allows shard rebalancing only for replica shards. .. _cluster.routing.allocation.allow_rebalance: **cluster.routing.allocation.allow_rebalance** | *Default:* ``indices_all_active`` | *Runtime:* ``yes`` | *Allowed values:* ``always | indices_primary_active | indices_all_active`` Defines when rebalancing will happen based on the total state of all the indices shards in the cluster. Defaults to ``indices_all_active`` to reduce chatter during initial :ref:`recovery `. .. _cluster.routing.allocation.cluster_concurrent_rebalance: **cluster.routing.allocation.cluster_concurrent_rebalance** | *Default:* ``2`` | *Runtime:* ``yes`` Defines how many concurrent rebalancing tasks are allowed across all nodes. .. _cluster.routing.allocation.node_initial_primaries_recoveries: **cluster.routing.allocation.node_initial_primaries_recoveries** | *Default:* ``4`` | *Runtime:* ``yes`` Defines how many concurrent primary shard recoveries are allowed on a node. Since primary recoveries use data that is already on disk (as opposed to inter-node recoveries), recovery should be fast and so this setting can be higher than :ref:`node_concurrent_recoveries `. .. _cluster.routing.allocation.node_concurrent_recoveries: **cluster.routing.allocation.node_concurrent_recoveries** | *Default:* ``2`` | *Runtime:* ``yes`` Defines how many concurrent recoveries are allowed on a node. .. 
_conf-routing-allocation-balance: Shard balancing ............... You can configure how CrateDB attempts to balance shards across a cluster by specifying one or more property *weights*. CrateDB will consider a cluster to be balanced when no further allowed action can bring the weighted properties of each node closer together. .. NOTE:: Balancing may be restricted by other settings (e.g., :ref:`attribute-based ` and :ref:`disk-based ` shard allocation). .. _cluster.routing.allocation.balance.shard: **cluster.routing.allocation.balance.shard** | *Default:* ``0.45f`` | *Runtime:* ``yes`` Defines the weight factor for shards :ref:`allocated ` on a node (float). Raising this raises the tendency to equalize the number of shards across all nodes in the cluster. .. NOTE:: :ref:`cluster.routing.allocation.balance.shard` and :ref:`cluster.routing.allocation.balance.index` cannot be both set to ``0.0f``. .. _cluster.routing.allocation.balance.index: **cluster.routing.allocation.balance.index** | *Default:* ``0.55f`` | *Runtime:* ``yes`` Defines a factor to the number of shards per index :ref:`allocated ` on a specific node (float). Increasing this value raises the tendency to equalize the number of shards per index across all nodes in the cluster. .. NOTE:: :ref:`cluster.routing.allocation.balance.shard` and :ref:`cluster.routing.allocation.balance.index` cannot be both set to ``0.0f``. .. _cluster.routing.allocation.balance.threshold: **cluster.routing.allocation.balance.threshold** | *Default:* ``1.0f`` | *Runtime:* ``yes`` Minimal optimization value of operations that should be performed (non negative float). Increasing this value will cause the cluster to be less aggressive about optimising the shard balance. .. _conf-routing-allocation-attributes: Attribute-based shard allocation ................................ You can control how shards are allocated to specific nodes by setting :ref:`custom attributes ` on each node (e.g., server rack ID or node availability zone). After doing this, you can define :ref:`cluster-wide attribute awareness ` and then configure :ref:`cluster-wide attribute filtering `. .. SEEALSO:: For an in-depth example of using custom node attributes, check out the `multi-zone setup how-to guide`_. .. _conf-routing-allocation-awareness: Cluster-wide attribute awareness ````````````````````````````````` To make use of :ref:`custom attributes ` for :ref:`attribute-based ` :ref:`shard allocation `, you must configure *cluster-wide attribute awareness*. .. _cluster.routing.allocation.awareness.attributes: **cluster.routing.allocation.awareness.attributes** | *Runtime:* ``no`` You may define :ref:`custom node attributes ` which can then be used to do awareness based on the :ref:`allocation ` of a shard and its replicas. For example, let's say we want to use an attribute named ``rack_id``. We start two nodes with ``node.attr.rack_id`` set to ``rack_one``. Then we create a single table with five shards and one replica. The table will be fully deployed on the current nodes (five shards and one replica each, making a total of 10 shards). Now, if we start two more nodes with ``node.attr.rack_id`` set to ``rack_two``, CrateDB will relocate shards to even out the number of shards across the nodes. However, a shard and its replica will not be allocated to nodes sharing the same ``rack_id`` value. The ``awareness.attributes`` setting supports using several values. .. 
_cluster.routing.allocation.awareness.force.\*.values: **cluster.routing.allocation.awareness.force.\*.values** | *Runtime:* ``no`` Attributes on which :ref:`shard allocation ` will be forced. Here, ``*`` is a placeholder for the awareness attribute, which can be configured using the :ref:`cluster.routing.allocation.awareness.attributes ` setting. For example, let's say we configured forced shard allocation for an awareness attribute named ``zone`` with ``values`` set to ``zone1, zone2``. Start two nodes with ``node.attr.zone`` set to ``zone1``. Then, create a table with five shards and one replica. The table will be created, but only five shards will be allocated (with no replicas). The replicas will only be allocated when we start one or more nodes with ``node.attr.zone`` set to ``zone2``. .. _conf-routing-allocation-filtering: Cluster-wide attribute filtering ```````````````````````````````` To control how CrateDB uses :ref:`custom attributes ` for :ref:`attribute-based ` :ref:`shard allocation `, you must configure *cluster-wide attribute filtering*. .. NOTE:: CrateDB will retroactively enforce filter definitions. If a new filter would prevent newly created matching shards from being allocated to a node, CrateDB would also move any *existing* matching shards away from that node. .. _cluster.routing.allocation.include.*: **cluster.routing.allocation.include.*** | *Runtime:* ``yes`` Only :ref:`allocate shards ` on nodes where at least **one** of the specified values matches the attribute. For example:: cluster.routing.allocation.include.zone: "zone1,zone2" This setting can be overridden for individual tables by the related :ref:`table setting `. .. _cluster.routing.allocation.exclude.*: **cluster.routing.allocation.exclude.*** | *Runtime:* ``yes`` Only :ref:`allocate shards ` on nodes where **none** of the specified values matches the attribute. For example:: cluster.routing.allocation.exclude.zone: "zone1" This setting can be overridden for individual tables by the related :ref:`table setting `. Therefore, if a node is excluded from shard allocation by this cluster-level setting, the node can still allocate shards if the table setting allows it. .. _cluster.routing.allocation.require.*: **cluster.routing.allocation.require.*** | *Runtime:* ``yes`` Specifies a number of rules, **all** of which must match for a node in order to :ref:`allocate a shard ` on it. This setting can be overridden for individual tables by the related :ref:`table setting `. .. _conf-routing-allocation-disk: Disk-based shard allocation ........................... .. _cluster.routing.allocation.disk.threshold_enabled: **cluster.routing.allocation.disk.threshold_enabled** | *Default:* ``true`` | *Runtime:* ``yes`` Prevents :ref:`shard allocation ` on nodes depending on the disk usage. .. _cluster.routing.allocation.disk.watermark.low: **cluster.routing.allocation.disk.watermark.low** | *Default:* ``85%`` | *Runtime:* ``yes`` Defines the lower disk threshold limit for :ref:`shard allocations `. New shards will not be allocated on nodes with disk usage greater than this value. It can also be set to an absolute bytes value (e.g. ``500mb``) to prevent the cluster from allocating new shards on nodes with less free disk space than this value. .. _cluster.routing.allocation.disk.watermark.high: **cluster.routing.allocation.disk.watermark.high** | *Default:* ``90%`` | *Runtime:* ``yes`` Defines the higher disk threshold limit for :ref:`shard allocations `.
The cluster will attempt to relocate existing shards to another node if the disk usage on a node rises above this value. It can also be set to an absolute bytes value (like e.g. ``500mb``) to relocate shards from nodes with less free disk space than this value. .. _cluster.routing.allocation.disk.watermark.flood_stage: **cluster.routing.allocation.disk.watermark.flood_stage** | *Default:* ``95%`` | *Runtime:* ``yes`` Defines the threshold on which CrateDB enforces a read-only block on every index that has at least one :ref:`shard allocated ` on a node with at least one disk exceeding the flood stage. .. NOTE:: :ref:`sql-create-table-blocks-read-only-allow-delete` setting is automatically reset to ``FALSE`` for the tables if the disk space is freed and the threshold is undershot. ``cluster.routing.allocation.disk.watermark`` settings may be defined as percentages or bytes values. However, it is not possible to mix the value types. By default, the cluster will retrieve information about the disk usage of the nodes every 30 seconds. This can also be changed by setting the `cluster.info.update.interval`_ setting. .. NOTE:: The watermark settings are also used for the :ref:`sys-node_checks_watermark_low` and :ref:`sys-node_checks_watermark_high` node check. Setting ``cluster.routing.allocation.disk.threshold_enabled`` to false will disable the allocation decider, but the node checks will still be active and warn users about running low on disk space. .. _cluster.routing.allocation.total_shards_per_node: **cluster.routing.allocation.total_shards_per_node** | *Default*: ``-1`` | *Runtime*: ``yes`` Limits the number of primary and replica shards that can be :ref:`allocated ` per node. A value of ``-1`` means unlimited. Setting this to ``1000``, for example, will prevent CrateDB from assigning more than 1000 shards per node. A node with 1000 shards would be excluded from allocation decisions and CrateDB would attempt to allocate shards to other nodes, or leave shards unassigned if no suitable node can be found. .. NOTE:: If a table is created with :ref:`sql-create-table-number-of-replicas` provided as a range or default ``0-1`` value, the limit check accounts only for primary shards and not for possible expanded replicas and thus actual number of all shards can exceed the limit. .. _indices.recovery: Recovery -------- .. _indices.recovery.max_bytes_per_sec: **indices.recovery.max_bytes_per_sec** | *Default:* ``40mb`` | *Runtime:* ``yes`` Specifies the maximum number of bytes that can be transferred during :ref:`shard recovery ` per seconds. Limiting can be disabled by setting it to ``0``. This setting allows to control the network usage of the recovery process. Higher values may result in higher network utilization, but also faster recovery process. .. _indices.recovery.retry_delay_state_sync: **indices.recovery.retry_delay_state_sync** | *Default:* ``500ms`` | *Runtime:* ``yes`` Defines the time to wait after an issue caused by cluster state syncing before retrying to :ref:`recover `. .. _indices.recovery.retry_delay_network: **indices.recovery.retry_delay_network** | *Default:* ``5s`` | *Runtime:* ``yes`` Defines the time to wait after an issue caused by the network before retrying to :ref:`recover `. .. _indices.recovery.internal_action_timeout: **indices.recovery.internal_action_timeout** | *Default:* ``15m`` | *Runtime:* ``yes`` Defines the timeout for internal requests made as part of the :ref:`recovery `. .. 
_indices.recovery.internal_action_long_timeout: **indices.recovery.internal_action_long_timeout** | *Default:* ``30m`` | *Runtime:* ``yes`` Defines the timeout for internal requests made as part of the :ref:`recovery ` that are expected to take a long time. Defaults to twice :ref:`internal_action_timeout `. .. _indices.recovery.recovery_activity_timeout: **indices.recovery.recovery_activity_timeout** | *Default:* ``30m`` | *Runtime:* ``yes`` :ref:`Recoveries ` that don't show any activity for more then this interval will fail. Defaults to :ref:`internal_action_long_timeout `. .. _indices.recovery.max_concurrent_file_chunks: **indices.recovery.max_concurrent_file_chunks** | *Default:* ``2`` | *Runtime:* ``yes`` Controls the number of file chunk requests that can be sent in parallel per :ref:`recovery `. As multiple recoveries are already running in parallel, controlled by :ref:`cluster.routing.allocation.node_concurrent_recoveries `, increasing this expert-level setting might only help in situations where peer recovery of a single shard is not reaching the total inbound and outbound peer recovery traffic as configured by :ref:`indices.recovery.max_bytes_per_sec `, but is CPU-bound instead, typically when using transport-level security or compression. Memory management ----------------- .. _memory.allocation.type: **memory.allocation.type** | *Default:* ``on-heap`` | *Runtime:* ``yes`` Supported values are ``on-heap`` and ``off-heap``. This influences if memory is preferably allocated in the heap space or in the off-heap/direct memory region. Setting this to ``off-heap`` doesn't imply that the heap won't be used anymore. Most allocations will still happen in the heap space but some operations will be allowed to utilize off heap buffers. .. warning:: Using ``off-heap`` is considered **experimental**. .. _memory.operation_limit: **memory.operation_limit** | *Default:* ``0`` | *Runtime:* ``yes`` Default value for the :ref:`memory.operation_limit session setting `. Changing the cluster setting will only affect new sessions, not existing sessions. Example statement to update the default value to 1 GB, i.e. 1073741824 bytes:: cr> SET GLOBAL "memory.operation_limit" = 1073741824; SET OK, 1 row affected (... sec) Operations that hit this memory limit will trigger a ``CircuitBreakingException`` that can be handled in the application to inform the user about too much memory consumption for the particular query. Query circuit breaker --------------------- The Query circuit breaker will keep track of the used memory during the execution of a query. If a query consumes too much memory or if the cluster is already near its memory limit it will terminate the query to ensure the cluster keeps working. .. _indices.breaker.query.limit: **indices.breaker.query.limit** | *Default:* ``60%`` | *Runtime:* ``yes`` Specifies the limit for the query breaker. Provided values can either be absolute values (interpreted as a number of bytes), byte sizes (like ``1mb``) or percentage of the heap size (like ``12%``). A value of ``-1`` disables breaking the circuit while still accounting memory usage. Request circuit breaker ----------------------- The request circuit breaker allows an estimation of required heap memory per request. If a single request exceeds the specified amount of memory, an exception is raised. .. _indices.breaker.request.limit: **indices.breaker.request.limit** | *Default:* ``60%`` | *Runtime:* ``yes`` Specifies the JVM heap limit for the request circuit breaker. 
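Both breaker limits are runtime settings. As a minimal sketch (the ``70%`` value is purely illustrative), a limit can be adjusted with ``SET GLOBAL``, read back through ``sys.cluster``, and reverted with ``RESET GLOBAL``::

    SET GLOBAL TRANSIENT "indices.breaker.query.limit" = '70%';

    -- read back the currently applied value
    SELECT settings['indices']['breaker']['query']['limit'] AS query_limit
    FROM sys.cluster;

    -- restore the default
    RESET GLOBAL "indices.breaker.query.limit";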
Accounting circuit breaker -------------------------- Tracks things that are held in memory independent of queries. For example the memory used by Lucene for segments. .. _indices.breaker.accounting.limit: **indices.breaker.accounting.limit** | *Default:* ``100%`` | *Runtime:* ``yes`` Specifies the JVM heap limit for the accounting circuit breaker .. CAUTION:: This setting is deprecated and will be removed in a future release. .. _stats.breaker.log: Stats circuit breakers ---------------------- Settings that control the behaviour of the stats circuit breaker. There are two breakers in place, one for the jobs log and one for the operations log. For each of them, the breaker limit can be set. .. _stats.breaker.log.jobs.limit: **stats.breaker.log.jobs.limit** | *Default:* ``5%`` | *Runtime:* ``yes`` The maximum memory that can be used from :ref:`CRATE_HEAP_SIZE ` for the :ref:`sys.jobs_log ` table on each node. When this memory limit is reached, the job log circuit breaker logs an error message and clears the :ref:`sys.jobs_log ` table completely. .. _stats.breaker.log.operations.limit: **stats.breaker.log.operations.limit** | *Default:* ``5%`` | *Runtime:* ``yes`` The maximum memory that can be used from :ref:`CRATE_HEAP_SIZE ` for the :ref:`sys.operations_log ` table on each node. When this memory limit is reached, the operations log circuit breaker logs an error message and clears the :ref:`sys.operations_log ` table completely. Total circuit breaker --------------------- .. _indices.breaker.total.limit: **indices.breaker.total.limit** | *Default:* ``95%`` | *Runtime:* ``yes`` The maximum memory that can be used by all aforementioned circuit breakers together. Even if an individual circuit breaker doesn't hit its individual limit, queries might still get aborted if several circuit breakers together would hit the memory limit configured in ``indices.breaker.total.limit``. Thread pools ------------ Every node uses a number of thread pools to schedule operations, each pool is dedicated to specific operations. The most important pools are: * ``write``: Used for write operations like index, update or delete. The ``type`` defaults to ``fixed``. * ``search``: Used for read operations like ``SELECT`` statements. The ``type`` defaults to ``fixed``. * ``get``: Used for some specific read operations. For example on tables like ``sys.shards`` or ``sys.nodes``. The ``type`` defaults to ``fixed``. * ``refresh``: Used for :ref:`refresh operations `. The ``type`` defaults to ``scaling``. * ``generic``: For internal tasks like cluster state management. The ``type`` defaults to ``scaling``. * ``logical_replication``: For logical replication operations. The ``type`` defaults to fixed. In addition to those pools, there are also ``netty`` worker threads which are used to process network requests and many CPU bound actions like query analysis and optimization. The thread pool settings are expert settings which you generally shouldn't need to touch. They are dynamically sized depending on the number of available CPU cores. If you're running multiple services on the same machine you instead should change the :ref:`processors` setting. Increasing the number of threads for a pool can result in degraded performance due to increased context switching and higher memory footprint. If you observe idle CPU cores increasing the thread pool size is rarely the right course of action, instead it can be a sign that: - Operations are blocked on disk IO. 
Increasing the thread pool size could result in more operations getting queued and blocked on disk IO without increasing throughput; instead, throughput can decrease due to more memory pressure and additional garbage collection activity. - Individual operations running single threaded. Not all tasks required to process a SQL statement can be further subdivided and processed in parallel, but many operations default to using one thread per shard. Because of this, you can consider increasing the number of shards of a table to increase the parallelism of a single individual statement and increase CPU core utilization. As an alternative, you can try increasing the concurrency on the client side to have CrateDB process more SQL statements in parallel. .. _thread_pool..type: **thread_pool..type** | *Runtime:* ``no`` | *Allowed values:* ``fixed | scaling`` ``fixed`` holds a fixed number of threads to handle the requests. It also has a queue for pending requests if no threads are available. ``scaling`` ensures that a thread pool holds a dynamic number of threads that are proportional to the workload. Settings for fixed thread pools ............................... If the type of a thread pool is set to ``fixed`` there are a few optional settings. .. _thread_pool..size: **thread_pool..size** | *Runtime:* ``no`` Number of threads. The default size of the different thread pools depends on the number of available CPU cores. .. _thread_pool..queue_size: **thread_pool..queue_size** | *Default write:* ``200`` | *Default search:* ``1000`` | *Default get:* ``100`` | *Runtime:* ``no`` Size of the queue for pending requests. A value of ``-1`` sets it to unbounded. If you have burst workloads followed by periods of inactivity, it can make sense to increase the ``queue_size`` to allow a node to buffer more queries before rejecting new operations. But be aware that increasing the queue size with sustained workloads will only increase the system's memory consumption and likely degrade performance. .. _overload_protection: Overload Protection ------------------- Overload protection settings control how many resources operations like ``INSERT INTO FROM QUERY`` or ``COPY`` can use. The values here serve as a starting point for an algorithm that dynamically adapts the effective concurrency limit based on the round-trip time of requests. Whenever one of these settings is updated, the previously calculated effective concurrency is reset. Changing settings will only affect new operations; already running operations will continue with the previous settings. .. _overload_protection.dml.initial_concurrency: **overload_protection.dml.initial_concurrency** | *Default:* ``5`` | *Runtime:* ``yes`` The initial number of concurrent operations allowed per target node. .. _overload_protection.dml.min_concurrency: **overload_protection.dml.min_concurrency** | *Default:* ``1`` | *Runtime:* ``yes`` The minimum number of concurrent operations allowed per target node. .. _overload_protection.dml.max_concurrency: **overload_protection.dml.max_concurrency** | *Default:* ``100`` | *Runtime:* ``yes`` The maximum number of concurrent operations allowed per target node. .. _overload_protection.dml.queue_size: **overload_protection.dml.queue_size** | *Default:* ``25`` | *Runtime:* ``yes`` How many operations are allowed to queue up. Metadata -------- .. _cluster.info.update.interval: **cluster.info.update.interval** | *Default:* ``30s`` | *Runtime:* ``yes`` Defines how often the cluster collects metadata information (e.g. disk usage)
if no concrete event is triggered. .. _metadata_gateway: Metadata gateway ................ The following settings can be used to configure the behavior of the :ref:`metadata gateway `. .. _gateway.expected_nodes: **gateway.expected_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The setting ``gateway.expected_nodes`` defines the total number of nodes expected in the cluster. It is evaluated together with ``gateway.recover_after_nodes`` to decide if the cluster can start with recovery. .. CAUTION:: This setting is deprecated and will be removed in a future version. Use `gateway.expected_data_nodes`_ instead. .. _gateway.expected_data_nodes: **gateway.expected_data_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The setting ``gateway.expected_data_nodes`` defines the total number of data nodes expected in the cluster. It is evaluated together with ``gateway.recover_after_data_nodes`` to decide if the cluster can start with recovery. .. _gateway.recover_after_time: **gateway.recover_after_time** | *Default:* ``5m`` | *Runtime:* ``no`` The ``gateway.recover_after_time`` setting defines the time to wait for the number of nodes set in ``gateway.expected_data_nodes`` (or ``gateway.expected_nodes``) to become available, before starting the recovery, once the number of nodes defined in ``gateway.recover_after_data_nodes`` (or ``gateway.recover_after_nodes``) has already been reached. This setting is ignored if ``gateway.expected_data_nodes`` or ``gateway.expected_nodes`` are set to 0 or 1. It also has no effect if ``gateway.recover_after_data_nodes`` is set equal to ``gateway.expected_data_nodes`` (or ``gateway.recover_after_nodes`` is set equal to ``gateway.expected_nodes``). The cluster also proceeds to immediate recovery, and the default 5-minute waiting time does not apply, if neither this setting nor ``expected_nodes`` and ``expected_data_nodes`` are explicitly set. .. _gateway.recover_after_nodes: **gateway.recover_after_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The ``gateway.recover_after_nodes`` setting defines the number of nodes that need to join the cluster before the cluster state recovery can start. If this setting is ``-1`` and ``gateway.expected_nodes`` is set, all nodes will need to be started before the cluster state recovery can start. Please note that proceeding with recovery when not all nodes are available could trigger the promotion of shards and the creation of new replicas, generating disk and network load, which may be unnecessary. You can use a combination of this setting with ``gateway.recover_after_time`` to mitigate this risk. .. CAUTION:: This setting is deprecated and will be removed in CrateDB 5.0. Use `gateway.recover_after_data_nodes`_ instead. .. _gateway.recover_after_data_nodes: **gateway.recover_after_data_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The ``gateway.recover_after_data_nodes`` setting defines the number of data nodes that need to be started before the cluster state recovery can start. If this setting is ``-1`` and ``gateway.expected_data_nodes`` is set, all data nodes will need to be started before the cluster state recovery can start. Please note that proceeding with recovery when not all data nodes are available could trigger the promotion of shards and the creation of new replicas, generating disk and network load, which may be unnecessary. You can use a combination of this setting with ``gateway.recover_after_time`` to mitigate this risk.
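Because these gateway settings cannot be changed at runtime, they have to be placed in each node's ``crate.yml``. A minimal sketch for a cluster with three data nodes (the values are illustrative only)::

    gateway.expected_data_nodes: 3
    gateway.recover_after_data_nodes: 2
    gateway.recover_after_time: 5m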
Logical Replication ------------------- The replication process can be configured with the following settings. These settings are dynamic and can be changed at runtime. .. _replication.logical.ops_batch_size: **replication.logical.ops_batch_size** | *Default:* ``50000`` | *Min value:* ``16`` | *Runtime:* ``yes`` Maximum number of operations to replicate from the publisher cluster per poll. Represents the number by which a sequence is advanced. .. _replication.logical.reads_poll_duration: **replication.logical.reads_poll_duration** | *Default:* ``50`` | *Runtime:* ``yes`` The maximum time (in milliseconds) to wait for changes per poll operation. When a subscriber makes a request to a publisher, it has ``reads_poll_duration`` milliseconds to harvest changes from the publisher. .. _replication.logical.recovery.chunk_size: **replication.logical.recovery.chunk_size** | *Default:* ``1MB`` | *Min value:* ``1KB`` | *Max value:* ``1GB`` | *Runtime:* ``yes`` Chunk size to transfer files during the initial recovery of a replicating table. .. _replication.logical.recovery.max_concurrent_file_chunks: **replication.logical.recovery.max_concurrent_file_chunks** | *Default:* ``2`` | *Min value:* ``1`` | *Max value:* ``5`` | *Runtime:* ``yes`` Controls the number of file chunk requests that can be sent in parallel between clusters during recovery. .. hide: cr> RESET GLOBAL "stats.jobs_log_filter" RESET OK, 1 row affected (... sec) cr> RESET GLOBAL "memory.operation_limit" RESET OK, 1 row affected (... sec) .. _bootstrap checks: https://crate.io/docs/crate/howtos/en/latest/admin/bootstrap-checks.html .. _multi-zone setup how-to guide: https://crate.io/docs/crate/howtos/en/latest/clustering/multi-zone-setup.html .. highlight:: sh .. vale off .. _conf-node-settings: ====================== Node-specific settings ====================== .. rubric:: Table of contents .. contents:: :local: Basics ====== .. _cluster.name: **cluster.name** | *Default:* ``crate`` | *Runtime:* ``no`` The name of the CrateDB cluster the node should join. .. _node.name: **node.name** | *Runtime:* ``no`` The name of the node. If no name is configured, a random one will be generated. .. NOTE:: Node names must be unique in a CrateDB cluster. .. _node.store_allow_mmap: **node.store.allow_mmap** | *Default:* ``true`` | *Runtime:* ``no`` The setting indicates whether or not memory-mapping is allowed. Node types ========== CrateDB supports different types of nodes. The following settings can be used to differentiate nodes upon startup: .. _node.master: **node.master** | *Default:* ``true`` | *Runtime:* ``no`` Whether or not this node is able to get elected as *master* node in the cluster. .. _node.data: **node.data** | *Default:* ``true`` | *Runtime:* ``no`` Whether or not this node will store data. Using different combinations of these two settings, you can create four different types of node. Each type of node is differentiated by what types of load it will handle. Tabulating the truth values for ``node.master`` and ``node.data`` produces a truth table outlining the four different types of node: +---------------+-----------------------------+------------------------------+ | | **Master** | **No master** | +---------------+-----------------------------+------------------------------+ | **Data** | Handles all loads. | Handles client requests and | | | | query execution. | +---------------+-----------------------------+------------------------------+ | **No data** | Handles cluster management. | Handles client requests. | +---------------+-----------------------------+------------------------------+ Nodes marked as ``node.master`` will only handle cluster management if they are elected as the cluster master. All other loads are shared equally.
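For instance, a dedicated master-eligible node that holds no data, and a data-only node that can never become master, could be configured with the two settings described above. A minimal ``crate.yml`` sketch:

.. code-block:: yaml

    # Dedicated master-eligible node (cluster management only):
    node.master: true
    node.data: false

.. code-block:: yaml

    # Data-only node (never eligible for master election):
    node.master: false
    node.data: true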
General ======= .. _node.sql.read_only: **node.sql.read_only** | *Default:* ``false`` | *Runtime:* ``no`` If set to ``true``, the node will only allow SQL statements that result in read operations. .. _statement_timeout: **statement_timeout** | *Default:* ``0`` | *Runtime:* ``yes`` The maximum duration of any statement before it gets cancelled. This value is used as the default value for the :ref:`statement_timeout session setting `. If set to ``0``, queries are allowed to run indefinitely and are not cancelled automatically. .. NOTE:: Updating this setting won't affect existing sessions; it will only take effect for new sessions. Networking ========== .. _conf_hosts: Hosts ----- .. _network.host: **network.host** | *Default:* ``_local_`` | *Runtime:* ``no`` The IP address CrateDB will bind itself to. This setting sets both the `network.bind_host`_ and `network.publish_host`_ values. .. _network.bind_host: **network.bind_host** | *Default:* ``_local_`` | *Runtime:* ``no`` This setting determines the address CrateDB should bind itself to. .. _network.publish_host: **network.publish_host** | *Default:* ``_local_`` | *Runtime:* ``no`` This setting is used by a CrateDB node to publish its own address to the rest of the cluster. .. TIP:: Apart from IPv4 and IPv6 addresses there are some special values that can be used for all above settings: ========================= ================================================= ``_local_`` Any loopback addresses on the system, for example ``127.0.0.1``. ``_site_`` Any site-local addresses on the system, for example ``192.168.0.1``. ``_global_`` Any globally-scoped addresses on the system, for example ``8.8.8.8``. ``_[INTERFACE]_`` Addresses of a network interface, for example ``_en0_``. ========================= ================================================= .. _conf_ports: Ports ----- .. _http.port: **http.port** | *Runtime:* ``no`` This defines the TCP port range to which the CrateDB HTTP service will be bound. It defaults to ``4200-4300``. The first free port in this range is always used. If this is set to an integer value, it is considered an explicit single port. The HTTP protocol is used for the REST endpoint which is used by all clients except the Java client. .. _http.publish_port: **http.publish_port** | *Runtime:* ``no`` The port HTTP clients should use to communicate with the node. It is necessary to define this setting if the bound HTTP port (``http.port``) of the node is not directly reachable from outside, e.g. when running it behind a firewall or inside a Docker container. .. _transport.tcp.port: **transport.tcp.port** | *Runtime:* ``no`` This defines the TCP port range to which the CrateDB transport service will be bound. It defaults to ``4300-4400``. The first free port in this range is always used. If this is set to an integer value, it is considered an explicit single port. The transport protocol is used for internal node-to-node communication. .. _transport.publish_port: **transport.publish_port** | *Runtime:* ``no`` The port that the node publishes to the cluster for its own discovery. It is necessary to define this setting when the bound transport port (``transport.tcp.port``) of the node is not directly reachable from outside, e.g. when running it behind a firewall or inside a Docker container. ..
_psql.port: **psql.port** | *Runtime:* ``no`` This defines the TCP port range to which the CrateDB Postgres service will be bound to. It defaults to ``5432-5532``. Always the first free port in this range is used. If this is set to an integer value it is considered as an explicit single port. Advanced TCP settings --------------------- Any interface that uses TCP (Postgres wire, HTTP & Transport protocols) shares the following settings: .. _network.tcp.no_delay: **network.tcp.no_delay** | *Default:* ``true`` | *Runtime:* ``no`` Enable or disable the `Nagle's algorithm`_ for buffering TCP packets. Buffering is disabled by default. .. _network.tcp.keep_alive: **network.tcp.keep_alive** | *Default:* ``true`` | *Runtime:* ``no`` Configures the ``SO_KEEPALIVE`` option for sockets, which determines whether they send TCP keepalive probes. .. _network.tcp.reuse_address: **network.tcp.reuse_address** | *Default:* ``true`` on non-windows machines and ``false`` otherwise | *Runtime:* ``no`` Configures the ``SO_REUSEADDRS`` option for sockets, which determines whether they should reuse the address. .. _network.tcp.send_buffer_size: **network.tcp.send_buffer_size** | *Default:* ``-1`` | *Runtime:* ``no`` The size of the TCP send buffer (`SO_SNDBUF`_ socket option). By default not explicitly set. .. _network.tcp.receive_buffer_size: **network.tcp.receive_buffer_size** | *Default:* ``-1`` | *Runtime:* ``no`` The size of the TCP receive buffer (`SO_RCVBUF`_ socket option). By default not explicitly set. .. NOTE:: Each setting in this section has its counterpart for HTTP and transport. To provide a protocol specific setting, remove ``network`` prefix and use either ``http`` or ``transport`` instead. For example, no_delay can be configured as ``http.tcp.no_delay`` and ``transport.tcp.no_delay``. Please note, that PG interface takes its settings from transport. Transport settings ------------------ .. _transport.connect_timeout: **transport.connect_timeout** | *Default:* ``30s`` | *Runtime:* ``no`` The connect timeout for initiating a new connection. .. _transport.compress: **transport.compress** | *Default:* ``false`` | *Runtime:* ``no`` Set to `true` to enable compression (DEFLATE) between all nodes. .. _transport.ping_schedule: **transport.ping_schedule** | *Default:* ``-1`` | *Runtime:* ``no`` Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to `-1` (disabled). It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections. Paths ===== .. NOTE:: Relative paths are relative to :ref:`CRATE_HOME `. Absolute paths override this behavior. .. _path.conf: **path.conf** | *Default:* ``config`` | *Runtime:* ``no`` Filesystem path to the directory containing the configuration files ``crate.yml`` and ``log4j2.properties``. .. _path.data: **path.data** | *Default:* ``data`` | *Runtime:* ``no`` Filesystem path to the directory where this CrateDB node stores its data (table data and cluster metadata). Multiple paths can be set by using a comma separated list and each of these paths will hold full shards (instead of striping data across them). For example: .. code-block:: yaml path.data: /path/to/data1,/path/to/data2 When CrateDB finds striped shards at the provided locations (from CrateDB <0.55.0), these shards will be migrated automatically on startup. .. 
_path.logs: **path.logs** | *Default:* ``logs`` | *Runtime:* ``no`` Filesystem path to a directory where log files should be stored. Can be used as a variable inside ``log4j2.properties``. For example: .. code-block:: yaml appender: file: file: ${path.logs}/${cluster.name}.log .. _path.repo: **path.repo** | *Runtime:* ``no`` A list of filesystem or UNC paths where repositories of type :ref:`sql-create-repo-fs` may be stored. Without this setting a CrateDB user could write snapshot files to any directory that is writable by the CrateDB process. To safeguard against this security issue, the possible paths have to be whitelisted here. See also :ref:`location ` setting of repository type ``fs``. .. SEEALSO:: :ref:`blobs.path ` Plug-ins ======== .. _plugin.mandatory: **plugin.mandatory** | *Runtime:* ``no`` A list of plug-ins that are required for a node to startup. If any plug-in listed here is missing, the CrateDB node will fail to start. CPU === .. _processors: **processors** | *Runtime:* ``no`` The number of processors is used to set the size of the thread pools CrateDB is using appropriately. If not set explicitly, CrateDB will infer the number from the available processors on the system. In environments where the CPU amount can be restricted (like Docker) or when multiple CrateDB instances are running on the same hardware, the inferred number might be too high. In such a case, it is recommended to set the value explicitly. Memory ====== .. _bootstrap.memory_lock: **bootstrap.memory_lock** | *Default:* ``false`` | *Runtime:* ``no`` CrateDB performs poorly when the JVM starts swapping: you should ensure that it *never* swaps. If set to ``true``, CrateDB will use the ``mlockall`` system call on startup to ensure that the memory pages of the CrateDB process are locked into RAM. Garbage collection ================== CrateDB logs if JVM garbage collection on different memory pools takes too long. The following settings can be used to adjust these timeouts: .. _monitor.jvm.gc.collector.young.warn: **monitor.jvm.gc.collector.young.warn** | *Default:* ``1000ms`` | *Runtime:* ``no`` CrateDB will log a warning message if it takes more than the configured timespan to collect the *Eden Space* (heap). .. _monitor.jvm.gc.collector.young.info: **monitor.jvm.gc.collector.young.info** | *Default:* ``700ms`` | *Runtime:* ``no`` CrateDB will log an info message if it takes more than the configured timespan to collect the *Eden Space* (heap). .. _monitor.jvm.gc.collector.young.debug: **monitor.jvm.gc.collector.young.debug** | *Default:* ``400ms`` | *Runtime:* ``no`` CrateDB will log a debug message if it takes more than the configured timespan to collect the *Eden Space* (heap). .. _monitor.jvm.gc.collector.old.warn: **monitor.jvm.gc.collector.old.warn** | *Default:* ``10000ms`` | *Runtime:* ``no`` CrateDB will log a warning message if it takes more than the configured timespan to collect the *Old Gen* / *Tenured Gen* (heap). .. _monitor.jvm.gc.collector.old.info: **monitor.jvm.gc.collector.old.info** | *Default:* ``5000ms`` | *Runtime:* ``no`` CrateDB will log an info message if it takes more than the configured timespan to collect the *Old Gen* / *Tenured Gen* (heap). .. _monitor.jvm.gc.collector.old.debug: **monitor.jvm.gc.collector.old.debug** | *Default:* ``2000ms`` | *Runtime:* ``no`` CrateDB will log a debug message if it takes more than the configured timespan to collect the *Old Gen* / *Tenured Gen* (heap). Authentication ============== .. 
_host_based_auth: Trust authentication -------------------- .. _auth.trust.http_default_user: **auth.trust.http_default_user** | *Default:* ``crate`` | *Runtime:* ``no`` The default user that should be used for authentication when clients connect to CrateDB via HTTP protocol and they do not specify a user via the ``Authorization`` request header. .. _auth.trust.http_support_x_real_ip: **auth.trust.http_support_x_real_ip** | *Default:* ``false`` | *Runtime:* ``no`` If enabled, the HTTP transport will trust the ``X-Real-IP`` header sent by the client to determine the client's IP address. This is useful when CrateDB is running behind a reverse proxy or load-balancer. For improved security, any ``_local_`` IP address (``127.0.0.1`` and ``::1``) defined in this header will be ignored. .. warning:: Enabling this setting can be a security risk, as it allows clients to impersonate other clients by sending a fake ``X-Real-IP`` header. Host-based authentication ------------------------- Authentication settings (``auth.host_based.*``) are node settings, which means that their values apply only to the node where they are applied and different nodes may have different authentication settings. .. _auth.host_based.enabled: **auth.host_based.enabled** | *Default:* ``false`` | *Runtime:* ``no`` Setting to enable or disable Host Based Authentication (HBA). It is disabled by default. .. _jwt_defaults: JWT Based Authentication ........................ Default global settings for the :ref:`JWT authentication `. .. _auth.host_based.jwt.iss: **auth.host_based.jwt.iss** | *Runtime:* ``no`` Default value for the ``iss`` :ref:`JWT property `. If ``iss`` is set, user specific JWT properties are ignored. .. _auth.host_based.jwt.aud: **auth.host_based.jwt.aud** | *Runtime:* ``no`` Default value for the ``aud`` :ref:`JWT property `. If ``aud`` is set but ``iss`` is not, then global config is not complete and user specific JWT properties are used. HBA entries ........... The ``auth.host_based.config.`` setting is a group setting that can have zero, one or multiple groups that are defined by their group key (``${order}``) and their fields (``user``, ``address``, ``method``, ``protocol``, ``ssl``). .. _$(order): **${order}:** | An identifier that is used as a natural order key when looking up the host | based configuration entries. For example, an order key of ``a`` will be | looked up before an order key of ``b``. This key guarantees that the entry | lookup order will remain independent from the insertion order of the | entries. The :ref:`admin_hba` setting is a list of predicates that users can specify to restrict or allow access to CrateDB. The meaning of the fields of the are as follows: .. _auth.host_based.config.${order}.user: **auth.host_based.config.${order}.user** | *Runtime:* ``no`` | Specifies an existing CrateDB username, only ``crate`` user (superuser) is | available. If no user is specified in the entry, then all existing users | can have access. .. _auth.host_based.config.${order}.address: **auth.host_based.config.${order}.address** | *Runtime:* ``no`` | The client machine addresses that the client matches, and which are allowed | to authenticate. This field may contain an IPv4 address, an IPv6 address or | an IPv4 CIDR mask. For example: ``127.0.0.1`` or ``127.0.0.1/32``. It also | may contain a hostname or the special ``_local_`` notation which will match | both IPv4 and IPv6 connections from localhost. A hostname specification | that starts with a dot (.) matches a suffix of the actual hostname. 
| So .crate.io would match foo.crate.io but not just crate.io. If no address | is specified in the entry, then access to CrateDB is open for all hosts. .. _auth.host_based.config.${order}.method: **auth.host_based.config.${order}.method** | *Runtime:* ``no`` | The authentication method to use when a connection matches this entry. | Valid values are ``trust``, ``cert``, ``password`` and ``jwt``. If no | method is specified, the ``trust`` method is used by default. | See :ref:`auth_trust`, :ref:`auth_cert`, :ref:`auth_password` and | :ref:`auth_jwt` for more information about these methods. .. _auth.host_based.config.${order}.protocol: **auth.host_based.config.${order}.protocol** | *Runtime:* ``no`` | Specifies the protocol for which the authentication entry should be used. | If no protocol is specified, then this entry will be valid for all | protocols that rely on host based authentication see :ref:`auth_trust`). .. _auth.host_based.config.${order}.ssl: **auth.host_based.config.${order}.ssl** | *Default:* ``optional`` | *Runtime:* ``no`` | Specifies whether the client must use SSL/TLS to connect to the cluster. | If set to ``on`` then the client must be connected through SSL/TLS | otherwise is not authenticated. If set to ``off`` then the client must | *not* be connected via SSL/TLS otherwise is not authenticated. Finally | ``optional``, which is the value when the option is completely skipped, | means that the client can be authenticated regardless of SSL/TLS is used | or not. Example of config groups: .. code-block:: yaml auth.host_based.config: entry_a: user: crate address: 127.16.0.0/16 entry_b: method: trust entry_3: user: crate address: 172.16.0.0/16 method: trust protocol: pg ssl: on .. _ssl_config: Secured communications (SSL/TLS) ================================ Secured communications via SSL allows you to encrypt traffic between CrateDB nodes and clients connecting to them. Connections are secured using Transport Layer Security (TLS). .. _ssl.http.enabled: **ssl.http.enabled** | *Default:* ``false`` | *Runtime:* ``no`` Set this to true to enable secure communication between the CrateDB node and the client through SSL via the HTTPS protocol. .. _ssl.psql.enabled: **ssl.psql.enabled** | *Default:* ``false`` | *Runtime:* ``no`` Set this to true to enable secure communication between the CrateDB node and the client through SSL via the PostgreSQL wire protocol. .. _ssl.transport.mode: **ssl.transport.mode** | *Default:* ``legacy`` | *Runtime:* ``no`` For communication between nodes, choose: ``off`` SSL cannot be used ``legacy`` SSL is not used. If HBA is enabled, transport connections won't be verified Any reachable host can establish a connection. ``on`` SSL must be used .. _ssl.keystore_filepath: **ssl.keystore_filepath** | *Runtime:* ``no`` The full path to the node keystore file. .. _ssl.keystore_password: **ssl.keystore_password** | *Runtime:* ``no`` The password used to decrypt the keystore file defined with ``ssl.keystore_filepath``. .. _ssl.keystore_key_password: **ssl.keystore_key_password** | *Runtime:* ``no`` The password entered at the end of the ``keytool -genkey command``. .. NOTE:: Optionally trusted CA certificates can be stored separately from the node's keystore into a truststore for CA certificates. .. _ssl.truststore_filepath: **ssl.truststore_filepath** | *Runtime:* ``no`` The full path to the node truststore file. If not defined, then only a keystore will be used. .. 
_ssl.truststore_password: **ssl.truststore_password** | *Runtime:* ``no`` The password used to decrypt the truststore file defined with ``ssl.truststore_filepath``. .. _ssl.resource_poll_interval: **ssl.resource_poll_interval** | *Default:* ``5m`` | *Runtime:* ``no`` The frequency at which SSL files such as the keystore and truststore are polled for changes. Cross-origin resource sharing (CORS) ==================================== Many browsers support the `same-origin policy`_ which requires web applications to explicitly allow requests across origins. The `cross-origin resource sharing`_ settings in CrateDB allow you to configure this. .. _http.cors.enabled: **http.cors.enabled** | *Default:* ``false`` | *Runtime:* ``no`` Enable or disable `cross-origin resource sharing`_. .. _http.cors.allow-origin: **http.cors.allow-origin** | *Default:* ```` | *Runtime:* ``no`` Define allowed origins of a request. ``*`` allows *any* origin (which can be a substantial security risk) and by prepending a ``/`` the string will be treated as a :ref:`regular expression `. For example, ``/https?:\/\/crate.io/`` will allow requests from ``http://crate.io`` and ``https://crate.io``. This setting disallows any origin by default. .. _http.cors.max-age: **http.cors.max-age** | *Default:* ``1728000`` (20 days) | *Runtime:* ``no`` Max cache age of a preflight request in seconds. .. _http.cors.allow-methods: **http.cors.allow-methods** | *Default:* ``OPTIONS, HEAD, GET, POST, PUT, DELETE`` | *Runtime:* ``no`` Allowed HTTP methods. .. _http.cors.allow-headers: **http.cors.allow-headers** | *Default:* ``X-Requested-With, Content-Type, Content-Length`` | *Runtime:* ``no`` Allowed HTTP headers. .. _http.cors.allow-credentials: **http.cors.allow-credentials** | *Default:* ``false`` | *Runtime:* ``no`` Add the ``Access-Control-Allow-Credentials`` header to responses. .. _`same-origin policy`: https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy .. _`cross-origin resource sharing`: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS Blobs ===== .. _blobs.path: **blobs.path** | *Runtime:* ``no`` Path to a filesystem directory where blob data allocated for this node should be stored. By default, blobs will be stored under the same path as normal data. A relative path value is interpreted as relative to ``CRATE_HOME``. .. _ref-configuration-repositories: Repositories ============ Repositories are used to :ref:`backup ` a CrateDB cluster. .. _repositories.url.allowed_urls: **repositories.url.allowed_urls** | *Runtime:* ``no`` This setting only applies to repositories of type :ref:`sql-create-repo-url`. With this setting, a list of URLs can be specified that are allowed to be used if a repository of type ``url`` is created. Wildcards are supported in the host, path, query and fragment parts. This setting is a security measure to prevent access to arbitrary resources. In addition, the supported protocols can be restricted using the :ref:`repositories.url.supported_protocols ` setting. .. _repositories.url.supported_protocols: **repositories.url.supported_protocols** | *Default:* ``http``, ``https``, ``ftp``, ``file`` and ``jar`` | *Runtime:* ``no`` A list of protocols that are supported by repositories of type :ref:`sql-create-repo-url`. The ``jar`` protocol is used to access the contents of jar files. For more info, see the Java `JarURLConnection documentation`_. See also the :ref:`path.repo ` setting. ..
_`JarURLConnection documentation`: https://docs.oracle.com/javase/8/docs/api/java/net/JarURLConnection.html Queries ======= .. _indices.query.bool.max_clause_count: **indices.query.bool.max_clause_count** | *Default:* ``8192`` | *Runtime:* ``no`` This setting limits the number of boolean clauses that can be generated by ``!= ANY()``, ``LIKE ANY()``, ``ILIKE ANY()``, ``NOT LIKE ANY()`` and ``NOT ILIKE ANY()`` :ref:`operators ` on arrays in order to prevent users from executing queries that may result in heavy memory consumption causing nodes to crash with ``OutOfMemory`` exceptions. Throws ``TooManyClauses`` errors when the limit is exceeded. .. NOTE:: You can avoid ``TooManyClauses`` errors by increasing this setting. The number of boolean clauses used can be larger than the elements of the array . Legacy ======= .. _legacy.table_function_column_naming: **legacy.table_function_column_naming** | *Default:* ``false`` | *Runtime:* ``no`` Since CrateDB 5.0.0, if the table function is not aliased and is returning a single base data typed column, the table function name is used as the column name. This setting can be set in order to use the naming convention prior to 5.0.0. The following table functions are affected by this setting: - :ref:`unnest ` - :ref:`regexp_matches ` - :ref:`generate_series ` When the setting is set and a single column is expected to be returned, the returned column will be named ``col1``, ``groups``, or ``col1`` respectively. .. NOTE:: Beware that if not all nodes in the cluster are consistently set or unset, the behaviour will depend on the node handling the query. .. _conf-node-lang-js: JavaScript language =================== .. _lang.js.enabled: **lang.js.enabled** | *Default:* ``true`` | *Runtime:* ``no`` Setting to enable or disable :ref:`JavaScript UDF ` support. .. _conf-fdw: Foreign Data Wrappers ===================== .. _fdw.allow_local: **fdw.allow_local** | *Default:* ``false`` | *Runtime:* ``no`` Allow access to local addresses via :ref:`Foreign data wrappers ` for all users. By default, only the ``crate`` superuser is allowed to access foreign servers that point to ``localhost``. .. warning:: Changing this to ``true`` can pose a security risk if you do not trust the users with ``AL`` permissions on the system. They can create foreign servers, foreign tables and user mappings that allow them to access services running on the same machine as CrateDB as if connected locally - effectively bypassing any restrictions set up via :ref:`admin_hba`. Do **not** change this if you don't understand the implications. .. _conf-node-attributes: Custom attributes ================= The ``node.attr`` namespace is a bag of custom attributes. Custom attributes can be :ref:`used to control shard allocation `. You can create any attribute you want under this namespace, like ``node.attr.key: value``. These attributes use the ``node.attr`` namespace to distinguish them from core node attribute like ``node.name``. Custom attributes are not validated by CrateDB, unlike core node attributes. .. vale on .. _plugins: https://github.com/crate/crate/blob/master/devs/docs/plugins.rst .. _Nagle's algorithm: https://en.wikipedia.org/wiki/Nagle%27s_algorithm .. _SO_RCVBUF: https://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#SO_RCVBUF .. _SO_SNDBUF: https://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#SO_SNDBUF
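As a brief illustration of the ``node.attr`` namespace described in the Custom attributes section above, a node could advertise the availability zone it runs in. The attribute name ``zone`` and its value are arbitrary examples, not built-in names:

.. code-block:: yaml

    # Arbitrary example attribute; any key under node.attr is allowed.
    node.attr.zone: us-east-1a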
.. _concept-clustering: ========== Clustering ========== The aim of this document is to describe, on a high level, how the distributed SQL database CrateDB uses a shared nothing architecture to form high- availability, resilient database clusters with minimal effort of configuration. It will lay out the core concepts of the shared nothing architecture at the heart of CrateDB. The main difference to a `primary-secondary architecture`_ is that every node in the CrateDB cluster can perform every operation - hence all nodes are equal in terms of functionality (see :ref:`concept-node-components`) and are configured the same. .. rubric:: Table of contents .. contents:: :local: .. _concept-node-components: Components of a CrateDB Node ============================ To understand how a CrateDB cluster works it makes sense to first take a look at the components of an individual node of the cluster. .. _figure_1: .. figure:: interconnected-crate-nodes.png :align: center Figure 1 Multiple interconnected instances of CrateDB form a single database cluster. The components of each node are equal. :ref:`figure_1` shows that in CrateDB each node of a cluster contains the same components that (a) interface with each other, (b) with the same component from a different node and/or (c) with the outside world. These four major components are: SQL Handler, Job Execution Service, Cluster State Service, and Data Storage. SQL Handler ----------- The SQL Handler part of a node is responsible for three aspects: (a) handling incoming client requests, (b) parsing and analyzing the SQL statement from the request and (c) creating an execution plan based on the analyzed statement (`abstract syntax tree`_) The SQL Handler is the only of the four components that interfaces with the "outside world". CrateDB supports three protocols to handle client requests: (a) HTTP (b) a Binary Transport Protocol (c) the PostgreSQL Wire Protocol A typical request contains a SQL statement and its corresponding arguments. Job Execution Service --------------------- The Job Execution Service is responsible for the execution of a plan ("job"). The phases of the job and the resulting operations are already defined in the execution plan. A job usually consists of multiple operations that are distributed via the Transport Protocol to the involved nodes, be it the local node and/or one or multiple remote nodes. Jobs maintain IDs of their individual operations. This allows CrateDB to "track" (or for example "kill") distributed queries. Cluster State Service --------------------- The three main functions of the Cluster State Service are: (a) cluster state management, (b) election of the master node and (c) node discovery, thus being the main component for cluster building (as described in section :ref:`concept-clusters`). It communicates using the Binary Transport Protocol. Data storage ------------ The data storage component handles operations to store and retrieve data from disk based on the execution plan. In CrateDB, the data stored in the tables is sharded, meaning that tables are divided and (usually) stored across multiple nodes. Each shard is a separate Lucene index that is stored physically on the filesystem. Reads and writes are operating on a shard level. .. _concept-clusters: Multi-node setup: Clusters ========================== A CrateDB cluster is a set of two or more CrateDB instances (referred to as *nodes*) running on different hosts which form a single, distributed database. 
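As a rough sketch, nodes of such a cluster typically share a cluster name and a list of seed addresses for discovery. The host names below are placeholders, and the setting names ``discovery.seed_hosts`` and ``cluster.initial_master_nodes`` are assumed from CrateDB 4.x and later (the latter is also referenced in the discovery section below):

.. code-block:: yaml

    # Example crate.yml fragment for each node; host names are placeholders.
    cluster.name: my-cluster
    discovery.seed_hosts:
      - node-1.example.internal:4300
      - node-2.example.internal:4300
      - node-3.example.internal:4300
    cluster.initial_master_nodes:
      - node-1
      - node-2
      - node-3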
For inter-node communication, CrateDB uses a software-specific transport protocol that utilizes byte-serialized Plain Old Java Objects (`POJOs`_) and operates on a separate port. That so-called "transport port" must be open and reachable from all nodes in the cluster. Cluster state management ------------------------ The cluster state is versioned and all nodes in a cluster keep a copy of the latest cluster state. However, only a single node in the cluster -- the *master node* -- is allowed to change the state at runtime. Settings, metadata, and routing ................................ The cluster state contains all necessary meta information to maintain the cluster and coordinate operations: * Global cluster settings * Discovered nodes and their status * Schemas of tables * The status and location of primary and replica shards When the master node updates the cluster state it will publish the new state to all nodes in the cluster and wait for all nodes to respond before processing the next update. .. _concept-master-election: Master Node Election -------------------- In a CrateDB cluster there can only be one master node at any single time. The cluster only becomes available to serve requests once a master has been elected, and a new election takes place if the current master node becomes unavailable. By default, all nodes are master-eligible, but :ref:`a node setting ` is available to indicate, if desired, that a node must not take on the role of master. To elect a master among the eligible nodes, a majority (``floor(n/2) + 1``, where ``n`` is the number of nodes in the voting configuration), also known as a *quorum*, is required among a subset of all master-eligible nodes; this subset of nodes is known as the *voting configuration*. The *voting configuration* is a list which is persisted as part of the cluster state. It is maintained automatically in a way that ensures split-brain scenarios are not possible. Every time a node joins the cluster, or leaves the cluster, even if it is for a few seconds, CrateDB re-evaluates the voting configuration. If the new number of master-eligible nodes in the cluster is odd, CrateDB will put them all in the voting configuration. If the number is even, CrateDB will exclude one of the master-eligible nodes from the voting configuration. The voting configuration is not shrunk below 3 nodes, meaning that if there were 3 nodes in the voting configuration and one of them becomes unavailable, they all stay in the voting configuration and a quorum of 2 nodes is still required. A master node rescinds its role if it cannot contact a quorum of nodes from the latest voting configuration. .. WARNING:: If you do infrastructure maintenance, please note that as nodes are shut down or rebooted, they will temporarily leave the voting configuration, and for the cluster to elect a master a quorum is required among the nodes that were last in the voting configuration. For instance, if you have a 5-node cluster, with all nodes master-eligible, and node 1 is currently the master, and you shut down node 5, then node 4, then node 3, the cluster will stay available as the voting configuration will have adapted to only have nodes 1, 2, and 3 on it. If you then shut down one more node the cluster will become unavailable as a quorum of 2 nodes is now required and not available. To bring the cluster back online at this point you will require two nodes among 1, 2, and 3. Bringing back nodes 3, 4, and 5 will not be sufficient. .. NOTE:: Special `settings and considerations `_ applied prior to CrateDB version 4.0.0.
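To see which node currently holds the master role, you can query the ``sys`` tables. This assumes the ``master_node`` column of ``sys.cluster``, which holds the node ID of the elected master:

.. code-block:: sql

    -- Show the ID and name of the currently elected master node.
    SELECT c.master_node, n.name
    FROM sys.cluster c
    JOIN sys.nodes n ON n.id = c.master_node;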
.. _concept-discovery: Discovery --------- The process of finding, adding and removing nodes is done in the discovery module. .. _figure_2: .. figure:: discovery-process.png :align: center Figure 2 Phases of the node discovery process. n1 and n2 already form a cluster where n1 is the elected master node, n3 joins the cluster. The cluster state update happens in parallel! Node discovery happens in multiple steps: * CrateDB requires a list of potential host addresses for other CrateDB nodes when it is starting up. That list can either be provided by a static configuration or can be dynamically generated, for example by fetching DNS SRV records, querying the Amazon EC2 API, and so on. * All potential host addresses are pinged. Nodes that receive the request respond with information about the cluster they belong to, the current master node, and their own node name. * Now that the node knows the master node, it sends a join request. The master node verifies the incoming request and adds the new node to the cluster state, which then contains the complete list of all nodes in the cluster. * The cluster state is then published across the cluster. This guarantees the common knowledge of the node addition. .. CAUTION:: If a node is started without any :ref:`initial_master_nodes ` or with a :ref:`discovery_type ` set to ``single-node`` (e.g., the default configuration), it will never join a cluster even if the configuration is subsequently changed. It is possible to force the node to forget its current cluster state by using the :ref:`cli-crate-node` CLI tool. However, be aware that this may result in data loss. Networking ---------- In a CrateDB cluster all nodes have a direct link to all other nodes; this is known as `full mesh`_ topology. For simplicity, every node maintains a one-way connection to every other node in the network. The network topology of a 5-node cluster looks like this: .. _figure_3: .. figure:: mesh-network-topology.png :align: center :width: 50% Figure 3 Network topology of a 5-node CrateDB cluster. Each line represents a one-way connection. The advantages of a fully connected network are that it provides a high degree of reliability and the paths between nodes are the shortest possible. However, there are limitations in the size of such networked applications because the number of connections (c) grows quadratically with the number of nodes (n): .. code-block:: mathematica c = n * (n - 1) Cluster behavior ================ The fact that each CrateDB node in a cluster is equal allows applications and users to connect to any node and get the same response for the same operations. As already described in section :ref:`concept-node-components`, the SQL handler is responsible for handling incoming client SQL requests, either using the HTTP protocol or the PostgreSQL wire protocol. The "handler node" that accepts the client request also returns the response to the client. It neither redirects nor delegates the request to a different node. The handler node parses the incoming request into a syntax tree, analyzes it and creates an execution plan locally. Then the operations of the plan are executed in a distributed manner. The upstream of the final execution phase is always the handler node, which then returns the response to the client. Application use case ==================== In a conventional setup of an application using a primary-secondary database the deployed stack looks similar to this: .. _figure_4: ..
figure:: conventional-deployment.png :align: center Figure 4 Conventional deployment of an application-database stack. However, this given setup does not scale because all application servers use the same, single entry point to the database for writes (the application can still read from secondaries) and if that entry point is unavailable the complete stack is broken. Choosing a shared nothing architecture allows DevOps to deploy their applications in an "elastic" manner without SPoF. The idea is to extend the shared nothing architecture from the database to the application which in most cases is stateless already. .. _figure_5: .. figure:: shared-nothing-deployment.png :align: center Figure 5 Elastic deployment making use of the shared nothing architecture. If you deploy an instance of CrateDB together with every application server you will be able to dynamically scale up and down your database backend depending on your needs. The application only needs to communicate to its "bound" CrateDB instance on localhost. The load balancer tracks the health of the hosts and if either the application or the database on a single host fails the complete host will taken out of the load balancing. .. _primary-secondary architecture: https://en.wikipedia.org/wiki/Master/slave_(technology) .. _abstract syntax tree: https://en.wikipedia.org/wiki/Abstract_syntax_tree .. _POJOs: https://en.wikipedia.org/wiki/Plain_Old_Java_Object .. _full mesh: https://en.wikipedia.org/wiki/Network_topology#Mesh .. _split-brain: https://en.wikipedia.org/wiki/Split-brain_(computing).. _concept-joins: ===== Joins ===== :ref:`Joins ` are essential operations in relational databases. They create a link between rows based on common values and allow the meaningful combination of these rows. CrateDB supports joins and due to its distributed nature allows you to work with large amounts of data. In this document we will present the following topics. First, an overview of the existing types of joins and algorithms provided. Then a description of how CrateDB implements them along with the necessary optimizations, which allows us to work with huge datasets. .. rubric:: Table of contents .. contents:: :local: .. _join-types: Join types ========== A join is a relational operation that merges two data sets based on certain properties. :ref:`joins_figure_1` shows which elements appear in which join. .. _joins_figure_1: .. figure:: joins.png :align: center Join Types From left to right, top to bottom: left join, right join, inner join, outer join, and cross join of a set L and R. .. _join-types-cross: Cross join ---------- A :ref:`cross join ` returns the Cartesian product of two or more relations. The result of the Cartesian product on the relation *L* and *R* consists of all possible permutations of each tuple of the relation *L* with every tuple of the relation *R*. .. _join-types-inner: Inner join ---------- An :ref:`inner join ` is a join of two or more relations that returns only tuples that satisfy the join condition. .. _join-types-equi: Equi Join ......... An *equi join* is a subset of an inner join and a comparison-based join, that uses equality comparisons in the join condition. The equi join of the relation *L* and *R* combines tuple *l* of relation *L* with a tuple *r* of the relation *R* if the join attributes of both tuples are identical. .. 
_join-types-outer: Outer join ---------- An :ref:`outer join ` returns a relation consisting of tuples that satisfy the join condition and dangling tuples from one or both of the relations, depending on the outer join type. An outer join can be one of the following types: - **Left** outer join returns tuples of the relation *L* matching tuples of the relation *R* and dangling tuples of the relation *L* padded with null values. - **Right** outer join returns tuples of the relation *R* matching tuples of the relation *L* and dangling tuples from the relation *R* padded with null values. - **Full** outer join returns matching tuples of both relations and dangling tuples produced by left and right outer joins. .. _join-algos: Join algorithms =============== CrateDB supports (a) CROSS JOIN, (b) INNER JOIN, (c) EQUI JOIN, (d) LEFT JOIN, (e) RIGHT JOIN and (f) FULL JOIN. All of these join types are executed using the :ref:`nested loop join algorithm `, except for :ref:`Equi Joins `, which are executed using the :ref:`hash join algorithm `. Special optimizations, according to the specific use cases, are applied to improve execution performance. .. _join-algos-nested-loop: Nested loop join ---------------- The **nested loop** join is the simplest join algorithm. One of the relations is nominated as the inner relation and the other as the outer relation. Each tuple of the outer relation is compared with each tuple of the inner relation and if the join condition is satisfied, the tuples of the relation *L* and *R* are concatenated and added into the returned virtual relation:: for each tuple l ∈ L do for each tuple r ∈ R do if l.a Θ r.b put tuple(l, r) in Q *Listing 1. Nested loop join algorithm.* .. _join-algos-nested-loop-prim: Primitive nested loop ..................... For joins on some relations, the nested loop operation can be executed directly on the handler node. Specifically, for queries involving a CROSS JOIN or joins on `system tables`_ / `information_schema`_, each shard sends the data to the handler node. Afterwards, this node runs the nested loop, applies limits, etc. and ultimately returns the results. Similarly, joins can be nested, so instead of collecting data from shards the rows can be the result of a previous join or :ref:`table function `. .. _join-algos-nested-loop-dist: Distributed nested loop ....................... Relations are usually distributed across different nodes, which requires the nested loop to acquire the data before being able to join. After finding the locations of the required shards (which is done in the planning stage), the smaller data set (based on the row count) is broadcast amongst all the nodes holding the shards they are joined with. After that, each of the receiving nodes can start running a nested loop on the subset it has just received. Finally, these intermediate results are pushed to the original (handler) node to merge and return the results to the requesting client (see :ref:`joins_figure_2`). .. _joins_figure_2: .. figure:: nested-loop.png :align: center Nodes that are holding the smaller shards broadcast the data to the processing nodes which then return the results to the requesting node. Queries can be optimized if they contain (a) ORDER BY, (b) LIMIT, or (c) an INNER/EQUI JOIN.
In any of these cases, the nested loop can be terminated earlier: - Ordering allows determining whether there are records left - Limit states the maximum number of rows that are returned Consequently, the number of rows is significantly reduced allowing the operation to complete much faster. .. _join-algos-hash: Hash join --------- The Hash Join algorithm is used to execute certain types of joins in a more efficient way than :ref:`Nested Loop `. .. _join-algos-hash-basic: Basic algorithm ............... The operation takes place in one node (the handler node to which the client is connected). The rows of the left relation of the join are read and a hashing algorithm is applied on the fields of the relation which participate in the join condition. The hashing algorithm generates a hash value which is used to store every row of the left relation in the proper position in a `hash table`_. Then the rows of the right relation are read one-by-one and the same hashing algorithm is applied on the fields that participate in the join condition. The generated hash value is used to make a lookup in the `hash table`_. If no entry is found, the row is skipped and the processing continues with the next row from the right relation. If an entry is found, the join condition is validated (handling hash collisions) and on successful validation the combined tuple of left and right relation is returned. .. _joins_figure_3: .. figure:: hash-join.png :align: center Basic hash join algorithm .. _join-algos-hash-block: Block hash join ............... The Hash Join algorithm requires a `hash table`_ containing all the rows of the left relation to be stored in memory. Therefore, depending on the size of the relation (number of rows) and the size of each row, the size of this hash table might exceed the available memory of the node executing the hash join. To resolve this limitation the rows of the left relation are loaded into the hash table in blocks. On every iteration the maximum available size of the `hash table`_ is calculated, based on the number of rows and size of each row of the table but also taking into account the available memory for query execution on the node. Once this block-size is calculated the rows of the left relation are processed and inserted into the `hash table`_ until the block-size is reached. The operation then starts reading the rows of the right relation, process them one-by-one and performs the lookup and the join condition validation. Once all rows from the right relation are processed the `hash table`_ is re-initialized based on a new calculation of the block size and a new iteration starts until all rows of the left relation are processed. With this algorithm the memory limitation is handled in expense of having to iterate over the rows of the right table multiple times, and it is the default algorithm used for Hash Join execution by CrateDB. .. _join-algos-hash-block-switch: Switch tables optimization '''''''''''''''''''''''''' Since the right table can be processed multiple times (number of rows from left / block-size) the right table should be the smaller (in number of rows) of the two relations participating in the join. Therefore, if originally the right relation is larger than the left the query planner performs a switch to take advantage of this detail and execute the hash join with better performance. .. _join-algos-hash-dist: Distributed block hash join ........................... 
Since CrateDB is a distributed database and a standard deployment consists of at least three nodes, and in most cases many more, the Hash Join algorithm execution can be further optimized (performance-wise) by executing it in a distributed manner across the CrateDB cluster. The idea is to have the hash join operation execute on multiple nodes of the cluster in parallel and then merge the intermediate results before returning them to the client. A hashing algorithm is applied to every row of both the left and right relations. A modulo by the number of nodes in the cluster is applied to the integer value generated by this hash, and the resulting number defines the node to which the row should be sent. As a result, each node of the cluster receives a subset of the whole data set which is ensured (by the hashing and modulo) to contain all candidate matching rows. Each node in turn performs a :ref:`block hash join ` on this subset and sends its result tuples to the handler node (where the client issued the query). Finally, the handler node receives those intermediate results, merges them, applies any pending ``ORDER BY``, ``LIMIT`` and ``OFFSET``, and sends the final result to the client. This algorithm is used by CrateDB for most cases of hash join execution except for joins on complex subqueries that contain ``LIMIT`` and/or ``OFFSET``. .. _joins_figure_4: .. figure:: distributed-hash-join.png :align: center Distributed hash join algorithm .. _join-optim: Join optimizations ================== .. _join-optim-optim-query-fetch: Query then fetch ---------------- Join operations on large relations can be extremely slow, especially if the join is executed with a :ref:`Nested Loop `, which means that the runtime complexity grows quadratically (O(n*m)). Specifically, for :ref:`cross joins ` this results in large amounts of data sent over the network and loaded into memory at the handler node. CrateDB reduces the volume of data transferred by employing "Query Then Fetch": First, filtering and ordering are applied (if possible where the data is located) to obtain the required document IDs. Next, as soon as the final data set is ready, CrateDB fetches the selected fields and returns the data to the client. .. _join-optim-optim-push-down: Push-down query optimization ---------------------------- Complex queries such as Listing 2 require the planner to decide when to filter, sort, and merge in order to efficiently execute the plan. In this case, the query would be split internally into subqueries before running the join. As shown in :ref:`joins_figure_5`, first filtering (and ordering) is applied to relations *L* and *R* on their shards, then the result is directly broadcast to the nodes running the join. Not only will this behavior reduce the number of rows to work with, it also distributes the workload among the nodes so that the (expensive) join operation can run faster. .. code-block:: SQL SELECT L.a, R.x FROM L, R WHERE L.id = R.id AND L.b > 100 AND R.y < 10 ORDER BY L.a *Listing 2. An INNER JOIN on ids (effectively an EQUI JOIN) which can be optimized.* .. _joins_figure_5: .. figure:: push-down.png :align: center Figure 5 Complex queries are broken down into subqueries that are run on their shards before joining. .. _join-optim-cross-join-elimination: Cross join elimination ---------------------- The optimizer will try to eliminate cross joins in the query plan by changing the join order.
Cross join elimination replaces a CROSS JOIN with an INNER JOIN if query conditions used in the WHERE clause or other join conditions allow for it. An example: .. code-block:: SQL SELECT * FROM t1 CROSS JOIN t2 INNER JOIN t3 ON t3.z = t1.x AND t3.z = t2.y The cross join elimination will change the order of the query from t1, t2, t3 to t2, t1, t3 so that each join has a join condition and the CROSS JOIN can be replaced by an INNER JOIN. When reordering, it will try to preserve the original join order as much as possible. If a CROSS JOIN cannot be eliminated, the original join order will be maintained. This optimizer rule can be disabled with the :ref:`optimizer eliminate cross join session setting `:: SET optimizer_eliminate_cross_join = false Note that this setting is experimental, and may change in the future. .. _hash table: https://en.wikipedia.org/wiki/Hash_table .. _here: http://www.dcs.ed.ac.uk/home/tz/phd/thesis.pdf .. _information_schema: https://crate.io/docs/reference/sql/information_schema.html .. _system tables: https://crate.io/docs/reference/sql/system.html.. _concept-storage-consistency: ======================= Storage and consistency ======================= This document provides an overview on how CrateDB stores and distributes state across the cluster and what consistency and durability guarantees are provided. .. NOTE:: Since CrateDB heavily relies on Elasticsearch_ and Lucene_ for storage and cluster consensus, concepts shown here might look familiar to Elasticsearch_ users, since the implementation is actually reused from the Elasticsearch_ code. .. rubric:: Table of contents .. contents:: :local: .. _concept-data-storage: Data storage ============ Every table in CrateDB is sharded, which means that tables are divided and distributed across the nodes of a cluster. Each shard in CrateDB is a Lucene_ index broken down into segments getting stored on the filesystem. Physically the files reside under one of the configured data directories of a node. Lucene only appends data to segment files, which means that data written to the disc will never be mutated. This makes it easy for replication and :ref:`recovery `, since syncing a shard is simply a matter of fetching data from a specific marker. An arbitrary number of replica shards can be configured per table. Every operational replica holds a full synchronized copy of the primary shard. With read operations, there is no difference between executing the operation on the primary shard or on any of the replicas. CrateDB randomly assigns a shard when routing an operation. It is possible to configure this behavior if required, see our best practice guide on `multi zone setups `_ for more details. Write operations are handled differently than reads. Such operations are synchronous over all active replicas with the following flow: 1. The primary shard and the active replicas are looked up in the cluster state for the given operation. The primary shard and a quorum of the configured replicas need to be available for this step to succeed. 2. The operation is routed to the according primary shard for execution. 3. The operation gets executed on the primary shard 4. If the operation succeeds on the primary, the operation gets executed on all replicas in parallel. 5. After all replica operations finish the operation result gets returned to the caller. Should any replica shard fail to write the data or times out in step 5, it's immediately considered as unavailable. .. 
_concept-atomicity: Atomicity at document level =========================== Each row of a table in CrateDB is a semi structured document which can be nested arbitrarily deep through the use of object and array types. Operations on documents are atomic. Meaning that a write operation on a document either succeeds as a whole or has no effect at all. This is always the case, regardless of the nesting depth or size of the document. CrateDB does not provide transactions. Since every document in CrateDB has a version number assigned, which gets increased every time a change occurs, patterns like `Optimistic Concurrency Control`_ can help to work around that limitation. .. _concept-durability: Durability ========== Each shard has a WAL_ also known as translog. It guarantees that operations on documents are persisted to disk without having to issue a Lucene-Commit for every write operation. When the translog gets flushed all data is written to the persistent index storage of Lucene and the translog gets cleared. In case of an unclean shutdown of a shard, the transactions in the translog are getting replayed upon startup to ensure that all executed operations are permanent. The translog is also directly transferred when a newly allocated replica initializes itself from the primary shard. There is no need to flush segments to disc just for replica :ref:`recovery ` purposes. .. _concept-addressing-documents: Addressing documents ==================== Every document has an :ref:`internal identifier `. By default this identifier is derived from the primary key. Documents living in tables without a primary key are assigned a unique auto-generated ID automatically when created. Each document is :ref:`routed ` to one specific shard according to the :ref:`routing column `. All rows that have the same routing column row value are stored in the same shard. The routing column can be specified with the :ref:`CLUSTERED ` clause when creating the table. If a :ref:`primary key ` has been defined, it will be used as the default routing column, otherwise the :ref:`internal document ID ` is used. While transparent to the user, internally there are two ways how CrateDB accesses documents: :get: Direct access by identifier. Only applicable if the routing key and the identifier can be computed from the given query specification. (e.g: the full primary key is defined in the where clause). This is the most efficient way to access a document, since only a single shard gets accessed and only a simple index lookup on the ``_id`` field has to be done. :search: Query by matching against fields of documents across all candidate shards of the table. .. _concept-consistency: Consistency =========== CrateDB is eventual consistent for search operations. Search operations are performed on shared ``IndexReaders`` which besides other functionality, provide caching and reverse lookup capabilities for shards. An ``IndexReader`` is always bound to the Lucene_ segment it was started from, which means it has to be refreshed in order to see new changes, this is done on a time based manner, but can also be done manually (see `refresh`_). Therefore a search only sees a change if the according ``IndexReader`` was refreshed after that change occurred. If a query specification results in a ``get`` operation, changes are visible immediately. This is achieved by looking up the document in the translog first, which will always have the most recent version of the document. The common update and fetch use-case is therefore possible. 
If a client updates a row and that row is looked up by its primary key after that update, the changes will always be visible, since the information will be retrieved directly from the translog. There is an exception to this when the ``WHERE`` clause contains complex filtering and/or many primary key values. You can find more details :ref:`here `. .. NOTE:: ``Dirty reads`` can occur if the primary shard becomes isolated. The primary will only realize it is isolated once it tries to communicate with its replicas or the master. At that point, a write operation is already committed into the primary and can be read by a concurrent read operation. In order to minimise the window of opportunity for this phenomenon, the CrateDB nodes communicate with the master every second (by default) and once they realise no master is known, they will start rejecting write operations. Every replica shard is updated synchronously with its primary and always carries the same information. Therefore, in terms of consistency, it does not matter whether the primary or a replica shard is accessed. Only the refresh of the ``IndexReader`` affects consistency. .. NOTE:: Due to internal constraints, when the ``WHERE`` clause filters on multiple columns of a ``PRIMARY KEY``, but one or more of those columns is tested against lots of values, the query might be executed using a ``Collect`` operator instead of a ``Get``, thus records might be unavailable until a ``REFRESH`` is run. The same situation could occur when the ``WHERE`` clause contains long complex expressions, e.g.:: SELECT * FROM t WHERE pk1 IN () AND pk2 = 3 AND pk3 = 'foo' SELECT * FROM t WHERE pk1 = ? AND pk2 = ? AND pk3 = ? OR pk1 = ? AND pk2 = ? AND pk3 = ? OR pk1 = ? ... .. CAUTION:: Some outage conditions can affect these consistency claims. See the :ref:`resiliency documentation ` for details. .. _concept-cluster-metadata: Cluster metadata ================= Cluster metadata is held in the so-called "Cluster State", which contains the following information: - Table schemas. - Primary and replica shard locations. Basically just a mapping from shard number to the storage node. - Status of each shard, which tells whether a shard is currently ready for use, is in another state such as "initializing" or "recovering", or cannot be assigned at all. - Information about discovered nodes and their status. - Configuration information. Every node has its own copy of the cluster state. However, only one node is allowed to change the cluster state at runtime. This node is called the "master" node and gets auto-elected. The "master" node has no special configuration at all; all nodes are master-eligible by default, and any master-eligible node can be elected as the master. There is also an automatic re-election if the current master node goes down for some reason. .. NOTE:: To avoid a scenario where two masters could be elected due to network partitioning, CrateDB automatically defines a quorum of nodes with which it is possible to elect a master. For details on how this works and further information, see :ref:`concept-master-election`. To explain the flow of events for any cluster state change, here is an example flow for an ``ALTER TABLE`` statement which changes the schema of a table: #. A node in the cluster receives the ``ALTER TABLE`` request. #. The node sends out a request to the current master node to change the table definition. #. The master node applies the changes locally to the cluster state and sends out a notification to all affected nodes about the change. #.
The nodes apply the change, so that they are now in sync with the master. #. Every node might take some local action depending on the type of cluster state change. .. _Elasticsearch: https://www.elasticsearch.org/ .. _Lucene: https://lucene.apache.org/core/ .. _WAL: https://en.wikipedia.org/wiki/Write-ahead_logging .. _Optimistic Concurrency Control: https://crate.io/docs/crate/reference/sql/occ.html .. _refresh: https://crate.io/docs/crate/reference/sql/refresh.html.. _concept-resiliency: ========== Resiliency ========== Distributed systems are tricky. All sorts of things can go wrong that are beyond your control. The network can go away, disks can fail, hosts can be terminated unexpectedly. CrateDB tries very hard to cope with these sorts of issues while maintaining :ref:`availability `, :ref:`consistency `, and :ref:`durability `. However, as with any distributed system, sometimes, *rarely*, things can go wrong. Thankfully, for most use-cases, if you follow best practices, you are extremely unlikely to experience resiliency issues with CrateDB. .. SEEALSO:: :ref:`Appendix: Resiliency Issues ` .. rubric:: Table of contents .. contents:: :local: .. _concept-resiliency-monitoring: Monitoring cluster status ========================= .. figure:: resilience-status.png :alt: The Admin UI in CrateDB has a status indicator which can be used to determine the stability and health of a cluster. A green status indicates that all shards have been replicated, are available, and are not being relocated. This is the lowest risk status for a cluster. The status will turn yellow when there is an elevated risk of encountering issues, due to a network failure or the failure of a node in the cluster. The status is updated every few seconds (variable on your cluster `ping configuration `_). .. _concept-resiliency-consistency: Storage and consistency ======================= Code that expects the behavior of an `ACID `_ compliant database like MySQL may not always work as expected with CrateDB. CrateDB does not support ACID transactions, but instead has :ref:`atomic operations ` and :ref:`eventual consistency ` at the row level. See also :ref:`concept-clustering`. Eventual consistency is the trade-off that CrateDB makes in exchange for high-availability that can tolerate most hardware and network failures. So you may observe data from different cluster nodes temporarily falling very briefly out-of-sync with each other, although over time they will become consistent. For example, you know a row has been written as soon as you get the ``INSERT OK`` message. But that row might not be read back by a subsequent ``SELECT`` on a different node until after a :ref:`table refresh ` (which typically occurs within one second). Your applications should be designed to work this storage and consistency model. .. _concept-resiliency-deployment: Deployment strategies ===================== When deploying CrateDB you should carefully weigh your need for high-availability and disaster recovery against operational complexity and expense. Which strategy you pick is going to depend on the specifics of your situation. Here are some considerations: - CrateDB is designed to scale horizontally. Make sure that your machines are fit for purpose, i.e. use SSDs, increase RAM up to 64 GB, and use multiple CPU cores when you can. But if you want to dynamically increase (or decrease) the capacity of your cluster, `add (or remove) nodes `_. - If availability is a concern, you can add `nodes across multiple zones `_ (e.g. 
different data centers or geographical regions). The more available your CrateDB cluster is, the more likely it is to withstand external failures like a zone going down. - If data durability or read performance is a concern, you can increase the number of :ref:`table replicas `. More table replicas means a smaller chance of permanent data loss due to hardware failures, in exchange for the use of more disk space and more intra-cluster network traffic. - If disaster recovery is important, you can :ref:`take regular snapshots ` and store those snapshots in cold storage. This safeguards data that has already been successfully written and replicated across the cluster. - CrateDB works well as part of a `data pipeline `_, especially if you’re working with high-volume data. If you have a message queue in front of CrateDB, you can configure it with backups and replay the data flow for a specific timeframe. This can be used to recover from issues that affect your data before it has been successfully written and replicated across the cluster. Indeed, this is the generally recommended way to recover from any of the rare consistency or data-loss issues you might encounter when CrateDB experiences network or hardware failures (see next section).(services)= # Services In the realm of CrateDB Cloud services, understanding your options is crucial for optimizing both performance and costs. This section of the documentation provides an in-depth look at the various service plans we offer, catering to a wide range of use-cases from small-scale applications to enterprise-level deployments. Our service plans are engineered for scalability, reliability, and performance. ::::{grid} 1 2 2 3 :margin: 0 0 0 0 :gutter: 01 :::{grid-item-card} Shared _non-critical workloads_ - Single Node - Up to 8 shared vCPUs - Up to 12 GiB RAM - Up to 1 TiB storage - Backups (once per day) - Single-AZ - Development support ::: :::{grid-item-card} Dedicated _production workloads_ - Up to 9 Nodes - Up to 144 vCPUs - Up to 495 GiB RAM - Up to 72 TiB storage - Backups (once per hour) - Multi-AZ - Basic Support --- - AWS / Azure Private Link - Uptime SLAs - Premium support available ::: :::{grid-item-card} Custom _large production workloads_ - Any cluster size - Custom compute options - Dedicated master nodes - Unlimited Storage - Custom Backups - Premium Support - AWS / Azure Private Link - Uptime SLAs
[Contact us for more information](https://cratedb.com/contact) ::: :::: ## Shared CrateDB Cloud's Shared Plan provides an affordable and easy-to-setup option for users who require basic database functionalities. The plan is built around the principle of cost-effectiveness and is particularly well-suited for development, testing, or non-critical production environments. **Node sizes** :::{table} :width: 700px :widths: 100, 100, 150, 150, 200 :align: left | Plan | Size | vCPUs | RAM | Storage | |----|--------|-----------|----------| ---- | | Shared | CRFREE* | up to 2 | 2 GiB | 8 GiB | | Shared | S2 | up to 2 | 2 GiB | 8 GiB to 1TiB | | Shared | S4 | up to 3 | 4 GiB | 8 GiB to 1TiB | | Shared | S6 | up to 4 | 6 GiB | 8 GiB to 1TiB | | Shared | S12 | up to 8 | 12 GiB | 8 GiB to 1TiB | ::: **Variable Performance**
Since your cluster will be sharing vCPUs with other clusters, the performance may vary depending on the overall load on the underlying machine. This variability makes it less predictable compared to Dedicated Plans, where your database is running on dedicated resources. **Fair-Use Principle**
The Shared Plan operates on a fair-use principle. All users are expected to utilize the shared resources responsibly so that the system remains equitable and functional for everyone. :::{note} __*CRFREE__: This plan is aimed at new users who want to test and evaluate CrateDB Cloud and is perpetually free to use. Every user can deploy one free tier cluster in their organization without adding a payment method. Free tier clusters will be suspended if they are not used for 4 days, and deleted after 10 more days of inactivity. They cannot be scaled or changed. ::: ## Dedicated CrateDB Cloud's Dedicated Plan is designed to provide robust, scalable, and high-performance database solutions. Unlike the Shared Plan, the Dedicated Plan offers dedicated resources, including dedicated vCPUs, to meet the demands of high-availability and high-throughput environments. **Node sizes** :::{table} :width: 700px :widths: 200, 100, 100, 100, 200 :align: left | Plan | Size | vCPUs | RAM | Storage | |----|--------|-----------|----------| ---- | | Dedicated | CR1 | 2 | 7 GiB | 32 GiB to 8 TiB | | Dedicated | CR2 | 4 | 14 GiB | 32 GiB to 8 TiB | | Dedicated | CR3 | 8 | 28 GiB | 32 GiB to 8 TiB | | Dedicated | CR4 | 16 | 55 GiB | 32 GiB to 8 TiB | ::: All Dedicated Plans can be scaled from 1 to 9 nodes. Depending on the number of nodes, the overall cluster size can be scaled up to the following limits: **Cluster sizes** :::{table} :width: 700px :widths: 200, 100, 100, 150, 150 :align: left | Plan | Size | vCPUs | RAM | Storage | |----|--------|-----------|----------| ---- | | Dedicated | CR1 | up to 18 | up to 63 GiB | up to 72 TiB | | Dedicated | CR2 | up to 36 | up to 126 GiB | up to 72 TiB | | Dedicated | CR3 | up to 72 | up to 252 GiB | up to 72 TiB | | Dedicated | CR4 | up to 144 | up to 495 GiB | up to 72 TiB | ::: **High Availability**
While it’s possible to start with just one node, for applications requiring high availability and fault tolerance, we recommend using at least three nodes. This ensures data replication and allows the cluster to handle node failures gracefully. Dedicated nodes are automatically deployed across three availability zones, and the specific zone for each node cannot be manually configured. A single dedicated node is placed in one zone, two nodes are distributed across two zones, and three or more nodes utilize all three availability zones, with nodes distributed as uniformly as possible. While a node count that is a multiple of three (e.g., 3, 6, 9, 12, etc.) provides optimal distribution across zones, it is not strictly required for high availability. ## Custom For organizations with specialized requirements that go beyond the Shared and Dedicated Plans, CrateDB Cloud offers custom solutions tailored to your specific needs. Our sales team and solutions engineers work closely with you to architect and deploy a custom cluster configuration, ensuring optimal performance, scalability, and security for your mission-critical applications. Whether you have stringent compliance mandates, complex integrations, or unique scalability challenges, our custom solutions provide the flexibility and expertise to meet your business objectives and technical requirements.
(organization-billing)= # Billing CrateDB Cloud offers flexible billing options to accommodate various needs and preferences. We only bill for actual usage of services, meaning there are no flat fees or minimum payments. ## Billing Information In the Billing tab under the Organization overview, you can add and edit your billing information, including your company address, country of residence, VAT info, invoice email, phone contacts, and more. You need to fill out this information whenever you use a paid offer on CrateDB Cloud, regardless of the payment method. ## Subscriptions After adding your billing information, you can add a subscription (payment method). We currently support the following payment methods: - **Cloud Marketplaces**: Available on [AWS](https://aws.amazon.com/marketplace/pp/prodview-l7rqf2xpeaubk), [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/crate.cratedbcloud), and [GCP](https://console.cloud.google.com/marketplace/product/cratedb-public/cratedb-gcp) Marketplaces. - **Credit Card**: Available worldwide. We use Stripe as our payment provider. - **Bank Transfer**: Available in the EU. We use Stripe as our payment provider. - **Custom Contract**: For large individual deployments, [contact](https://cratedb.com/contact) our sales team. :::{tip} **Marketplace Committed Spend:** All three marketplace offerings (AWS, Azure, GCP) can be applied towards any committed spend agreement (e.g., MACC) you have with the cloud provider. This effectively reduces your committed spend balance and allows you to use CrateDB as if it were a native service provided by the cloud provider. ::: ### Setup New Subscription ::::{tab} Cloud Marketplaces
1. Register for an account on the [CrateDB Cloud sign-in page](https://console.cratedb.cloud/). 2. Navigate to the **"Billing"** tab on the right side. 3. Add your billing information. 4. Click on **"Add Payment Method"**. 5. Select your preferred cloud marketplace. 6. Follow the instructions to sign up for **CrateDB Cloud** through the selected marketplace. 7. After completing the subscription in the marketplace, you will be redirected to CrateDB Cloud. 8. Connect one of your organizations to the created marketplace SaaS subscription. **You are now ready to deploy a cluster.** When you deploy a cluster, the usage will be reported regularly to the marketplace as usage amount in USD, and it will appear on your regular cloud provider's invoice. Depending on your settlement currency, a conversion may be applied. :::: ::::{tab} Credit Card
1. Register for an account on the [CrateDB Cloud sign-in page](https://console.cratedb.cloud/). 2. Navigate to the **"Billing"** tab on the right side. 3. Add your billing information. 4. Click on **"Add subscription"**. 5. Click on **"Pay with credit or debit card"**. 6. Fill out the required information and click **"Save"**. **You are now ready to deploy a cluster.** The payment cycle is monthly and aligns with the calendar month. You will be charged for the previous period's usage and will receive an invoice at the email address you provided. If needed, you can add a new credit card to replace the current one. :::: ::::{tab} Bank Transfer
1. Register for an account on the [CrateDB Cloud sign-in page](https://console.cratedb.cloud/). 2. Navigate to the **"Billing"** tab on the right side. 3. Add your billing information. 4. Click on **"Add Payment Method"**. 5. Click on **"Ask to enable"** next to **"Pay via Bank Transfer"**. Complete the form, and our team will contact you to process your request. **After** your request has been approved: 1. Click on **"Add Payment Method"**. 2. Click on **"Pay via Bank Transfer"**. **You are now ready to deploy a cluster.** The payment cycle is monthly and aligns with the calendar month. You will be charged for the previous period's usage, and an invoice will be sent to the email address you provided. Payment is due within the specified terms. :::{caution} **Bank transfer payment is currently available only within the EU.** These payments are processed by Stripe and invoiced in Euros at a fixed USD/EUR exchange rate. You can find the current exchange rate within the CrateDB Cloud Console after setting up the bank transfer payment method. ::: :::: ::::{tab} Custom Contract
Custom contracts are individually tailored to your needs. Please [contact](https://cratedb.com/contact) our sales team to set up a custom contract that fits your requirements. :::: :::{note} To remove a subscription, please contact support. ::: ## Usage Reporting Whenever you use a paid offer in CrateDB Cloud, we collect your usage information, including the cluster compute and storage size and the number of nodes deployed. You can view this usage in the CrateDB Cloud Console, where you'll find a usage snapshot for the current calendar month, the current cost for the deployed service, and any available credits that might be applied to the current usage period. Be aware that the billing period might deviate from the shown calendar month usage. ## Credits Credits are another way to pay for CrateDB Cloud services and can be used together with other payment methods. Credits are applied to your account and are used up first before any other payment method is charged. This means that if credits are available, you will not be charged, nor will any usage be reported to your payment provider. You can see the remaining credits and their validity date on the Billing and Usage page. There is also the option to purchase more credits at a discount by [contacting](https://cratedb.com/contact) our sales team. :::{tip} **Free Trial Credits**: If you just signed up, you have the option to enable $200 of credits that can be used for any paid cluster. To enable these credits, you need to provide a valid payment method, which will only be charged if you have used up your credits. :::
(organization-api)= # API We offer an API to allow programmatic access to CrateDB Cloud. The API can be accessed by generating a key and secret in your [account page](https://console.cratedb.cloud/account/settings) (login required). The API keys are bound to the CrateDB Cloud user account that generates them. This means that any actions performed using the API keys will be executed as that user. Consequently, the API keys inherit the same permissions as the user, allowing the same level of access and control over the organization and its resources. ![Cloud Console New Api Key](../_assets/img/create-api-key.png) Click the *Generate new key* button to create your key. A popup with your key and secret will appear. Make sure to store your secret safely, as you cannot access it again. (api-access)= ## Access The key and secret can be used as HTTP Basic Auth credentials when calling the API, e.g. :::{code} console sh$ curl -s -u $your_key:$your_secret https://console.cratedb.cloud/api/v2/users/me ::: This example will return details of the current user: :::{code} console {"email":"some@example.com","hmac":"...","is_superuser":false,"name":"Some User","organization_id":"123","status":"active","uid":"uid","username":"some@example.com"} ::: (api-examples)= ## Examples The API is documented with [Swagger](https://console.cratedb.cloud/api/docs) (login required). It contains endpoints for: - Organizations - Regions - Projects - Clusters - Products - Users - Roles - Subscriptions - Audit logs It provides example requests with all the required parameters, expected responses, and all response codes. Access the API documentation [here](https://console.cratedb.cloud/api/docs) (login required). (cluster-import)= # Import The first thing you see in the "Import" tab is the history of your imports. You can see whether you imported from a URL or from a file, the file name, the table into which you imported, the date, and the status. By clicking "Show details" you can display details of a particular import. Clicking the "Import new data" button will bring up the Import page, where you can select the source of your data. If you don't have a dataset prepared, we also provide an example in the URL import section. It's the New York City taxi trip dataset for July of 2019 (about 6.3M records). (cluster-import-url)= ## Import from URL To import data, fill out the URL, the name of the table which will be created and populated with your data, the data format, and whether it is compressed. If a table with the chosen name doesn't exist, it will be automatically created. The following data formats are supported: - CSV (all variants) - JSON (JSON-Lines, JSON Arrays and JSON Documents) - Parquet Gzip-compressed files are also supported. ![Cloud Console cluster upload from URL](../_assets/img/cluster-import-tab-url.png) (cluster-import-s3)= ## Import from private S3 bucket CrateDB Cloud allows convenient imports directly from S3-compatible storage. To import a file from a bucket, provide the name of your bucket and the path to the file. The S3 Access Key ID and S3 Secret Access Key are also needed. You can also specify the endpoint for non-AWS S3 buckets. Keep in mind that you may be charged for egress traffic, depending on your provider. There is also a volume limit of 10 GiB per file for S3 imports. The usual file formats are supported - CSV (all variants), JSON (JSON-Lines, JSON Arrays and JSON Documents), and Parquet.
![Cloud Console cluster upload from S3](../_assets/img/cluster-import-tab-s3.png) :::{note} It's important to make sure that you have the right permissions to access objects in the specified bucket. For AWS S3, your user should have a policy that allows GetObject access, for example: :::{code} { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowGetObject", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME/*" }] } ::: ::: (cluster-import-azure)= ## Import from Azure Blob Storage Container Importing data from private Azure Blob Storage containers is possible using a stored secret, which includes a secret name and either an Azure Storage Connection string or an Azure SAS Token URL. An admin user at the organization level can add this secret. You can specify a secret, a container, a table, and a path in the form `/folder/my_file.parquet`. As with other imports, Parquet, CSV, and JSON files are supported. The file size limitation for imports is 10 GiB per file. ![Cloud Console cluster upload from Azure Storage Container](../_assets/img/cluster-import-tab-azure.png) (cluster-import-globbing)= ## Importing multiple files Importing multiple files, also known as import globbing, is supported for any S3-compatible blob storage. The steps are the same as if importing from S3, i.e. bucket name, path to the file, and S3 ID/Secret. Importing multiple files from Azure Container/Blob Storage is also supported: `/folder/*.parquet` Files to be imported are specified by using the well-known [wildcard](https://en.wikipedia.org/wiki/Wildcard_character) notation, also known as "globbing". In computer programming, [glob](https://en.wikipedia.org/wiki/Glob_(programming)) patterns specify sets of filenames with wildcard characters. The following example would import all the files from the specified day. :::{code} console /somepath/AWSLogs/123456678899/CloudTrail/us-east-1/2023/11/12/*.json.gz ::: ![Cloud Console cluster import globbing](../_assets/img/cluster-import-globbing.png) As with other imports, the supported file types are CSV, JSON, and Parquet. (cluster-import-file)= ## Import from file Uploading directly from your computer offers more control over your data. From a security point of view, you don't have to share the data on the internet just to be able to import it to your cluster. You also have more control over who has access to your data. Your files are temporarily uploaded to a secure location managed by Crate (an S3 bucket in AWS) which is not publicly accessible. The files are automatically deleted after 3 days. You may re-import the same file into multiple tables without having to re-upload it within those 3 days. Up to 5 files may be uploaded at the same time, with the oldest ones being automatically deleted if you upload more. ![Cloud Console cluster upload from file](../_assets/img/cluster-import-tab-file.png) As with other imports, the supported file formats are: - CSV (all variants) - JSON (JSON-Lines, JSON Arrays and JSON Documents) - Parquet There is also a limit to file size, currently 1 GB. (overview-cluster-import-schema-evolution)= ## Schema evolution Schema Evolution, available for all import types, enables the automatic addition of new columns to existing tables during data import, eliminating the need to pre-define table schemas. This feature is applicable to both pre-existing tables and those created during the import process. It can be toggled via the 'Schema Evolution' checkbox on the import page.
Note that Schema Evolution is limited to adding new columns; it does not modify existing ones. For instance, if an existing table has an 'OrderID' column of type **INTEGER**, and an import is attempted with Schema Evolution enabled for data where 'OrderID' column is of type **STRING**, the import job will fail due to type mismatch. ## File Format Limitations **CSV** files: 1. Comma, tab and pipe delimiters are supported. **JSON** files: The following formats are supported for JSON: 1. JSON Documents. Will insert as a single row in the table. :::{code} console { "id":1, "text": "example" } ::: 2. JSON Arrays. Will insert as a row per array item. :::{code} console [ { "id":1, "text": "example" }, { "id":2, "text": "example2" } ] ::: 3. JSON-Lines. Each line will insert as a row. :::{code} console {"id":1, "text": "example"} {"id":2, "text": "example2"} :::(cluster-export)= # Export The "Export" section allows users to download specific tables/views. When you first visit the Export tab, you can specify the name of a table/view, format (CSV, JSON, or Parquet) and whether you'd like your data to be gzip compressed (recommended for CSV and JSON files). :::{important} - Size limit for exporting is 1 GiB - Exports are held for 3 days, then automatically deleted ::: :::{note} **Limitations with Parquet**: Parquet is a highly compressed data format for very efficient storage of tabular data. Please note that for OBJECT and ARRAY columns in CrateDB, the exported data will be JSON encoded when saving to Parquet (effectively saving them as strings). This is due to the complexity of encoding structs and lists in the Parquet format, where determining the exact schema might not be possible. When re-importing such a Parquet file, make sure you pre-create the table with the correct schema. ::: (cluster-backups)= # Backups You can find the Backups page in the detailed view of your cluster and you can see and restore all existing backups here. By default, a backup is made every hour. The backups are kept for 14 days. We also keep the last 14 backups indefinitely, no matter the state of your cluster. The Backups tab provides a list of all your backups. By default, a backup is made every hour. ![Cloud Console cluster backups page](../_assets/img/cluster-backups.png) You can also control the schedule of your backups by clicking the *Edit backup schedule* button. ![Cloud Console cluster backups edit page](../_assets/img/cluster-backups-edit.png) Here you can create a custom schedule by selecting any number of hour slots. Backups will be created at selected times. At least one backup a day is mandatory. To restore a particular backup, click the *Restore* button. A popup window with a SQL statement will appear. Input this statement to your Admin UI console either by copy-pasting it, or clicking the *Run query in Admin UI*. The latter will bring you directly to the Admin UI console with the statement automatically pre-filled. ![Cloud Console cluster backups restore page](../_assets/img/cluster-backups-restore.png) You have a choice between restoring the cluster fully, or only specific tables. (cluster-cloning)= ## Cluster Cloning Cluster cloning is a process of duplicating all the data from a specific snapshot into a different cluster. Creating the new cluster isn't part of the cloning process, you need to create the target cluster yourself. You can clone a cluster from the Backups page. ![Cloud Console cluster backup snapshots](../_assets/img/cluster-backups.png) Choose a snapshot and click the *Clone* button. 
As with restoring a backup, you can choose between cloning the whole cluster, or only specific tables. ![Cloud Console cluster clone popup](../_assets/img/cluster-clone-popup.png) :::{note} Keep in mind that the full cluster clone will include users, views, privileges and everything else. Cloning also doesn't distinguish between cluster plans, meaning you can clone from CR2 to CR1 or any other variation. ::: (cluster-cloning-fail)= ## Failed cloning There are circumstances under which cloning can fail or behave unexpectedly. These are: - If you already have tables with the same names in the target cluster as in the source snapshot, the entire clone operation will fail. - There isn't enough storage left on the target cluster to accommodate the tables you're trying to clone. In this case, you might get an incomplete cloning as the cluster will run out of storage. - You're trying to clone an invalid or no longer existing snapshot. This can happen if you're cloning through [Croud](https://cratedb.com/docs/cloud/cli/en/latest/). In this case, the cloning will fail. - You're trying to restore a table that is not included in the snapshot. This can happen if you're restoring snapshots through [Croud](https://cratedb.com/docs/cloud/cli/en/latest/). In this case, the cloning will fail. When cloning fails, it is indicated by a banner in the cluster overview screen. ![Cloud Console cluster failed cloning](../_assets/img/cluster-clone-failed.png)(integrations-mongo-cdc)= # MongoDB CDC (Preview) CrateDB Cloud enables continuous data ingestion from MongoDB using Change Data Capture (CDC), providing seamless, real-time synchronization of your data. :::{caution} This integration is currently in preview and may have restricted availability. For more information, please [contact us](https://cratedb.com/contact). ::: ## Key Concepts The MongoDB CDC integration in CrateDB Cloud allows you to keep your data synchronized between your MongoDB Atlas cluster and your CrateDB Cloud cluster in real-time. ### How It Works The integration functions in two main stages: 1. **Initial Sync:** The integration performs a complete scan of your MongoDB collections, importing all existing data into your CrateDB Cloud cluster. 2. **Continuous Sync:** The integration uses MongoDB Change Streams to monitor changes in your MongoDB collections and syncs these updates to your CrateDB Cloud cluster in real-time, ensuring that your data remains current. ### Data Consistency and Mode For continuous sync, CrateDB Cloud uses MongoDB's **full document mode** to ensure data consistency. This mode guarantees that MongoDB returns the latest majority-committed version of the updated document. While receiving partial deltas is more efficient, full document mode provides robust functionality. Support for partial deltas may be added in the future to enhance performance and flexibility. --- ## Create a new Integration A MongoDB integration allows you to sync a single collection from a MongoDB Atlas cluster. You can reuse an existing connection across multiple integrations to continuously sync data from multiple MongoDB Atlas collections. Supported authentication methods: - MongoDB SCRAM Authentication - MongoDB X.509 Authentication ### Set Up MongoDB Atlas Authentication The following steps should be performed in the MongoDB Atlas UI. #### Step 1: Create a Custom Role 1. **Navigate to Database Access** Go to **Database Access** in the MongoDB Atlas UI for the cluster you want to connect to CrateDB Cloud. 2. 
**Add a Custom Role** Under **Custom Roles**, click **Add New Custom Role**. 3. **Set Up Read-Only Access** Assign the following actions or roles to the custom role: - `find` - `changeStream` - `collStats` Specify the databases and collections for these actions. You can update access permissions in the MongoDB Atlas UI later if needed. #### Step 2: Create a User Depending on whether you plan to use SCRAM or X.509 authentication, create a database user with one of the following methods: :::{tab} SCRAM Auhentication 1. **Navigate to Database Access** In the MongoDB Atlas UI, go to **Database Access** and click **Add New Database User**. 2. **Set Authentication Method** Choose **Password** as the authentication method and enter a username and password for the database user. 3. **Assign the Role** Under **Database User Privileges**, select the custom role created in Step 1. 4. **Copy User Credentials** Click **Add User**, and make sure to record the username and password. These credentials will be used later in the CrateDB Cloud Console. ::: :::{tab} x.509 Authentication 1. **Navigate to Database Access** In the MongoDB Atlas UI, go to **Database Access** and click **Add New Database User**. 2. **Set Authentication Method** Choose **Certificate** as the authentication method. 3. **Assign the Role** Under **Database User Privileges**, select the custom role created in Step 1. 4. **Save the Certificate** Click **Add User**, and store the certificate securely. This will be required later in the CrateDB Cloud Console. ::: #### Step 3: Configure IP Access To allow CrateDB Cloud to access your MongoDB Atlas cluster, you must add the CrateDB Cloud IP addresses to the IP Access List in MongoDB Atlas. 1. **Navigate to Network Access** In the MongoDB Atlas UI, go to **Network Access** from the left navigation. 2. **Add IP Address** Click **Add IP Address** and choose an IP address or range to allow access. For testing purposes, you can select **Allow Access from Anywhere**, but for production, it is recommended to specify only the required IPs. :::{note} The specific IP addresses depend on the region of your CrateDB Cloud cluster. These IP addresses can also be found in the **Connection Details** section of the CrateDB Cloud Console, just before you click **Test Connection** during the setup process. **Outbound IP Addresses**: | Cloud Provider | Region | IP Addresses | |----------------|---------------|---------------------------------| | Azure | East US 2 | `52.184.241.228/32`, `52.254.31.90/32` | | Azure | West Europe | `51.105.153.175/32`, `108.142.34.5/32` | | AWS | EU West 1 | `34.255.75.224` | | AWS | US East 1 | `54.197.229.58` | | AWS | US West 2 | `54.189.16.20` | | GCP | US Central 1 | `34.69.134.49` | ::: :::{note} To set up a PrivateLink connection for the Mongo CDC integration, please reach out to our support team. ::: #### Step 4: Access Connection String You'll need to provide the connection string for your MongoDB Atlas cluster so that CrateDB Cloud can connect to it. 1. **Navigate to Your Cluster** In the MongoDB Atlas UI, navigate to the cluster you want to connect to CrateDB Cloud. 2. **Click "Connect"** From the cluster view, click on **Connect**. 3. **Select "Connect Your Application"** Choose **Connect your application** as the connection method. 4. **Copy the Connection String** Copy the connection string provided in the MongoDB Atlas UI. 
It will look like this: ``` mongodb+srv://<username>:<password>@<cluster-address>/<database>?retryWrites=true&w=majority ``` --- :::{note} If you are using X.509 authentication, the connection string will look slightly different and will not include a username and password. Instead, it will reference the certificate file: ``` mongodb+srv://<cluster-address>/<database>?authMechanism=MONGODB-X509&retryWrites=true&w=majority ``` Make sure to upload the X.509 certificate file when configuring the connection in CrateDB Cloud. ::: ### Set Up Integration in CrateDB Cloud Follow these steps in the CrateDB Cloud Console to set up the MongoDB CDC integration: #### Step 1: Create an Integration 1. Navigate to the **Import** section in the CrateDB Cloud Console. 2. Click **Create Integration** and select **MongoDB** as the source type. #### Step 2: Configure Connection 1. Choose **Create New Connection** or select an existing one. 2. Fill in the following details: :::{tab} SCRAM Authentication - **Connection Name**: Provide a unique name for the connection. - **Connection String**: Paste the connection string from MongoDB Atlas. - **Username**: Enter the database username (required for SCRAM). - **Password**: Enter the database password (required for SCRAM). - **Default Database**: Specify the default database to use for this connection. ::: :::{tab} X.509 Authentication - **Connection Name**: Provide a unique name for the connection. - **Connection String**: Paste the connection string from MongoDB Atlas. - **Certificate**: Upload the X.509 certificate file. - **Default Database**: Specify the default database to use for this connection. ::: #### Step 3: Test the Connection Click **Test Connection** to verify CrateDB Cloud can connect to your MongoDB Atlas cluster. Resolve any issues if the test fails. #### Step 4: Select Collection Enter the database and collection name from your MongoDB Atlas cluster that you want to sync with CrateDB Cloud. #### Step 5: Select Target Table 1. Specify the target table in your CrateDB Cloud cluster where the data will be synced. 2. MongoDB records will be inserted into an object column called `document`. 3. Select the object type for the column: - **`dynamic`**: Allows indexing and columnar storage for faster querying. - **`ignored`**: Prevents type conflicts in CrateDB if your source data lacks a strict schema. :::{note} If your source data doesn't follow a strict schema, select `ignored` to avoid type conflicts. However, selecting `dynamic` provides faster query performance by utilizing indexes and columnar storage. ::: #### Step 6: Configure Integration Settings 1. Enter a name for the integration. 2. Select the integration mode: - **Full Load Only**: Imports the data once but doesn't sync changes. - **Full Load and CDC**: Imports the data and syncs changes in real-time. - **CDC Only**: Syncs only new changes in real-time without importing existing data. #### Step 7: Create the Integration Click **Create Integration** to finalize the setup. CrateDB Cloud will now sync your MongoDB data based on the selected settings. --- ## Limitations The MongoDB CDC integration is available as a preview. The feature is stable enough for broader use but may still have limitations, known issues, or incomplete features. While suitable for many use cases, it is not yet recommended for mission-critical workloads. ### Column Name Restrictions Column or property names containing square brackets (`[]`) are not supported and are replaced with `__openbrk__` and `__closebrk__` respectively.
Likewise, column names containing dots (`.`) are not supported and are replaced with (`__dot__`). :::{warning} This behavior may change in future releases. ::: ### Unsupported Data Types The following MongoDB data types are not supported in the CrateDB Cloud MongoDB CDC integration: - **Long Strings** exceeding 32,766 characters are replaced with a placeholder value. - **Binary data types** other than UUIDs, which are converted to `TEXT` and **vectors**, which are converted to `ARRAY`s of numbers. - The `Decimal128` data type is not supported and is converted to a string, as CrateDB does not support a decimal data type... _user-defined-functions: ====================== User-defined functions ====================== .. rubric:: Table of contents .. contents:: :local: .. _udf-create-replace: ``CREATE OR REPLACE`` ===================== CrateDB supports user-defined :ref:`functions `. See :ref:`ref-create-function` for a full syntax description. ``CREATE FUNCTION`` defines a new function:: cr> CREATE FUNCTION my_subtract_function(integer, integer) ... RETURNS integer ... LANGUAGE JAVASCRIPT ... AS 'function my_subtract_function(a, b) { return a - b; }'; CREATE OK, 1 row affected (... sec) .. hide: cr> _wait_for_function('my_subtract_function(1::integer, 1::integer)') :: cr> SELECT doc.my_subtract_function(3, 1) AS col; +-----+ | col | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) ``CREATE OR REPLACE FUNCTION`` will either create a new function or replace an existing function definition:: cr> CREATE OR REPLACE FUNCTION log10(bigint) ... RETURNS double precision ... LANGUAGE JAVASCRIPT ... AS 'function log10(a) {return Math.log(a)/Math.log(10); }'; CREATE OK, 1 row affected (... sec) .. hide: cr> _wait_for_function('log10(1::bigint)') :: cr> SELECT doc.log10(10) AS col; +-----+ | col | +-----+ | 1.0 | +-----+ SELECT 1 row in set (... sec) It is possible to use named function arguments in the function signature. For example, the ``calculate_distance`` function signature has two ``geo_point`` arguments named ``start`` and ``end``:: cr> CREATE OR REPLACE FUNCTION calculate_distance("start" geo_point, "end" geo_point) ... RETURNS real ... LANGUAGE JAVASCRIPT ... AS 'function calculate_distance(start, end) { ... return Math.sqrt( ... Math.pow(end[0] - start[0], 2), ... Math.pow(end[1] - start[1], 2)); ... }'; CREATE OK, 1 row affected (... sec) .. NOTE:: Argument names are used for query documentation purposes only. You cannot reference arguments by name in the function body. Optionally, a schema-qualified function name can be defined. If you omit the schema, the current session schema is used:: cr> CREATE OR REPLACE FUNCTION my_schema.log10(bigint) ... RETURNS double precision ... LANGUAGE JAVASCRIPT ... AS 'function log10(a) { return Math.log(a)/Math.log(10); }'; CREATE OK, 1 row affected (... sec) .. NOTE:: In order to improve the PostgreSQL server compatibility CrateDB allows the creation of user defined functions against the :ref:`postgres-pg_catalog` schema. However, the creation of user defined functions against the read-only :ref:`system-information` and :ref:`information_schema` schemas is prohibited. .. _udf-supported-types: Supported types =============== Function arguments and return values can be any of the supported :ref:`data types `. The values passed into a function must strictly correspond to the specified argument data types. .. NOTE:: The value returned by the function will be casted to the return type provided in the definition if required. 
An exception will be thrown if the cast is not successful. .. _udf-overloading: Overloading =========== Within a specific schema, you can overload functions by defining functions with the same name but a different set of arguments:: cr> CREATE FUNCTION my_schema.my_multiply(integer, integer) ... RETURNS integer ... LANGUAGE JAVASCRIPT ... AS 'function my_multiply(a, b) { return a * b; }'; CREATE OK, 1 row affected (... sec) This would overload the ``my_multiply`` function with different argument types:: cr> CREATE FUNCTION my_schema.my_multiply(bigint, bigint) ... RETURNS bigint ... LANGUAGE JAVASCRIPT ... AS 'function my_multiply(a, b) { return a * b; }'; CREATE OK, 1 row affected (... sec) This would overload the ``my_multiply`` function with more arguments:: cr> CREATE FUNCTION my_schema.my_multiply(bigint, bigint, bigint) ... RETURNS bigint ... LANGUAGE JAVASCRIPT ... AS 'function my_multiply(a, b, c) { return a * b * c; }'; CREATE OK, 1 row affected (... sec) .. CAUTION:: It is considered bad practice to create functions that have the same name as the CrateDB built-in functions. .. NOTE:: If you call a function without a schema name, CrateDB will look it up in the built-in functions first and only then in the user-defined functions available in the :ref:`search_path `. **Therefore a built-in function with the same name as a user-defined function will hide the latter, even if it contains a different set of arguments.** However, such functions can still be called if the schema name is explicitly provided. .. _udf-determinism: Determinism =========== .. CAUTION:: User-defined functions need to be deterministic, meaning that they must always return the same result value when called with the same argument values, because CrateDB might cache the returned values and reuse the value if the function is called multiple times with the same arguments. .. _udf-drop-function: ``DROP FUNCTION`` ================= Functions can be dropped like this:: cr> DROP FUNCTION doc.log10(bigint); DROP OK, 1 row affected (... sec) Adding ``IF EXISTS`` prevents from raising an error if the function doesn't exist:: cr> DROP FUNCTION IF EXISTS doc.log10(integer); DROP OK, 1 row affected (... sec) Optionally, argument names can be specified within the drop statement:: cr> DROP FUNCTION IF EXISTS doc.calculate_distance(start_point geo_point, end_point geo_point); DROP OK, 1 row affected (... sec) Optionally, you can provide a schema:: cr> DROP FUNCTION my_schema.log10(bigint); DROP OK, 1 row affected (... sec) .. _udf-supported-languages: Supported languages =================== Currently, CrateDB only supports JavaScript for user-defined functions. .. _udf-js: JavaScript ---------- The user defined function JavaScript is compatible with the `ECMAScript 2019`_ specification. CrateDB uses the `GraalVM JavaScript`_ engine as a JavaScript (ECMAScript) language execution runtime. The `GraalVM JavaScript`_ engine is a Java application that works on the stock Java Virtual Machines (VMs). The interoperability between Java code (host language) and JavaScript user-defined functions (guest language) is guaranteed by the `GraalVM Polyglot API`_. Please note: CrateDB does not use the GraalVM JIT compiler as optimizing compiler. However, the `stock host Java VM JIT compilers`_ can JIT-compile, optimize, and execute the GraalVM JavaScript codebase to a certain extent. The execution context for guest JavaScript is created with restricted privileges to allow for the safe execution of less trusted guest language code. 
The guest language application context for each user-defined function is created with default access modifiers, so any access to managed resources is denied. The only exception is the host language interoperability configuration, which explicitly allows access to Java lists and arrays. Please refer to the `GraalVM Security Guide`_ for more detailed information. Also, even though user-defined functions are implemented with ECMA-compliant JavaScript, objects that are normally accessible in a web browser (e.g. ``window``, ``console``, and so on) are not available. .. NOTE:: GraalVM treats objects provided to JavaScript user-defined functions as close as possible to their respective counterparts and therefore, by default, only a subset of prototype functions are available in user-defined functions. For CrateDB 4.6 and earlier, the object prototype was disabled. Please refer to the `GraalVM JavaScript Compatibility FAQ`_ to learn more about the compatibility. .. _udf-js-supported-types: JavaScript supported types .......................... JavaScript functions can handle all CrateDB data types. However, for some return types the function output must correspond to a certain format. If a function requires ``geo_point`` as a return type, then the JavaScript function must return a ``double precision`` array of size 2, a ``WKT`` string, or a ``GeoJson`` object. Here is an example of a JavaScript function returning a ``double array``:: cr> CREATE FUNCTION rotate_point(point geo_point, angle real) ... RETURNS geo_point ... LANGUAGE JAVASCRIPT ... AS 'function rotate_point(point, angle) { ... var cos = Math.cos(angle); ... var sin = Math.sin(angle); ... var x = cos * point[0] - sin * point[1]; ... var y = sin * point[0] + cos * point[1]; ... return [x, y]; ... }'; CREATE OK, 1 row affected (... sec) Below is an example of a JavaScript function returning a ``WKT`` string, which will be cast to ``geo_point``:: cr> CREATE FUNCTION symmetric_point(point geo_point) ... RETURNS geo_point ... LANGUAGE JAVASCRIPT ... AS 'function symmetric_point (point, angle) { ... var x = - point[0], ... y = - point[1]; ... return "POINT (" + x + " " + y + ")"; ... }'; CREATE OK, 1 row affected (... sec) Similarly, if the function specifies the ``geo_shape`` return data type, then the JavaScript function should return a ``GeoJson`` object or ``WKT`` string:: cr> CREATE FUNCTION line("start" array(double precision), "end" array(double precision)) ... RETURNS object ... LANGUAGE JAVASCRIPT ... AS 'function line(start, end) { ... return { "type": "LineString", "coordinates" : [start, end] }; ... }'; CREATE OK, 1 row affected (... sec) .. NOTE:: If the return value of the JavaScript function is ``undefined``, it is converted to ``NULL``. .. _udf-js-numbers: Working with ``NUMBERS`` ........................ The JavaScript engine interprets numbers as ``java.lang.Double``, ``java.lang.Long``, or ``java.lang.Integer``, depending on the computation performed. In most cases, this is not an issue, since the return type of the JavaScript function will be cast to the return type specified in the ``CREATE FUNCTION`` statement, although the cast might result in a loss of precision. However, when you try to cast ``DOUBLE PRECISION`` to ``TIMESTAMP WITH TIME ZONE``, it will be interpreted as UTC seconds and will result in a wrong value:: cr> CREATE FUNCTION utc(bigint, bigint, bigint) ... RETURNS TIMESTAMP WITH TIME ZONE ... LANGUAGE JAVASCRIPT ... AS 'function utc(year, month, day) { ... return Date.UTC(year, month, day, 0, 0, 0); ...
    }';
    CREATE OK, 1 row affected (... sec)

.. hide:

    cr> _wait_for_function('utc(1::bigint, 1::bigint, 1::bigint)')

::

    cr> SELECT date_format(utc(2016,04,6)) as epoque;
    +------------------------------+
    | epoque                       |
    +------------------------------+
    | 48314-07-22T00:00:00.000000Z |
    +------------------------------+
    SELECT 1 row in set (... sec)

.. hide:

    cr> DROP FUNCTION utc(bigint, bigint, bigint);
    DROP OK, 1 row affected (... sec)

To avoid this behavior, the numeric value should be divided by 1000 before it
is returned::

    cr> CREATE FUNCTION utc(bigint, bigint, bigint)
    ...   RETURNS TIMESTAMP WITH TIME ZONE
    ...   LANGUAGE JAVASCRIPT
    ...   AS 'function utc(year, month, day) {
    ...     return Date.UTC(year, month, day, 0, 0, 0)/1000;
    ...   }';
    CREATE OK, 1 row affected (... sec)

.. hide:

    cr> _wait_for_function('utc(1::bigint, 1::bigint, 1::bigint)')

::

    cr> SELECT date_format(utc(2016,04,6)) as epoque;
    +-----------------------------+
    | epoque                      |
    +-----------------------------+
    | 2016-05-06T00:00:00.000000Z |
    +-----------------------------+
    SELECT 1 row in set (... sec)

.. hide:

    cr> DROP FUNCTION my_subtract_function(integer, integer);
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION my_schema.my_multiply(integer, integer);
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION my_schema.my_multiply(bigint, bigint, bigint);
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION my_schema.my_multiply(bigint, bigint);
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION rotate_point(point geo_point, angle real);
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION symmetric_point(point geo_point);
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION line(start_point array(double precision), end_point array(double precision));
    DROP OK, 1 row affected (... sec)

    cr> DROP FUNCTION utc(bigint, bigint, bigint);
    DROP OK, 1 row affected (... sec)

.. _ECMAScript 2019: https://262.ecma-international.org/10.0/index.html
.. _GraalVM JavaScript: https://www.graalvm.org/reference-manual/js/
.. _GraalVM JavaScript Compatibility FAQ: https://www.graalvm.org/latest/reference-manual/js/JavaScriptCompatibility/
.. _GraalVM Polyglot API: https://www.graalvm.org/reference-manual/embed-languages/
.. _GraalVM Security Guide: https://www.graalvm.org/security-guide/
.. _stock host Java VM JIT compilers: https://www.graalvm.org/reference-manual/js/RunOnJDK/

(integrate)=

# Integrations

You have a variety of options to connect and integrate third-party
applications, mostly using [CrateDB's PostgreSQL interface]. This
documentation section lists applications, frameworks, and libraries that can
be used together with CrateDB, and outlines how to use them optimally.

```{toctree}
:maxdepth: 2

ide
orm
df
etl/index
cdc/index
metrics/index
visualize/index
bi/index
lineage/index
testing
```

:::{tip}
Please also visit the [Overview of CrateDB integration tutorials].
:::

[CrateDB's PostgreSQL interface]: inv:crate-reference#interface-postgresql
[Overview of CrateDB integration tutorials]: https://community.cratedb.com/t/overview-of-cratedb-integration-tutorials/1015

We've created many integration-focused tutorials to help you use CrateDB with
other awesome tools and libraries. 👇 All tutorials require a working
installation of CrateDB.

| **Tool** | **Articles/Tutorials** |
|---|---|
| [Apache Airflow](https://airflow.apache.org/) / [Astronomer](https://www.astronomer.io/) | https://community.cratedb.com/t/cratedb-and-apache-airflow-automating-data-export-to-s3/901 <br> https://community.cratedb.com/t/cratedb-and-apache-airflow-implementation-of-data-retention-policy/913 <br> https://community.cratedb.com/t/cratedb-and-apache-airflow-building-a-data-ingestion-pipeline/926 <br> https://community.cratedb.com/t/cratedb-and-apache-airflow-building-a-hot-cold-storage-data-retention-policy/934 |
| [Apache Arrow](https://arrow.apache.org) | https://community.cratedb.com/t/import-parquet-files-into-cratedb-using-apache-arrow-and-sqlalchemy/1161 |
| [Apache Kafka](https://kafka.apache.org/) | https://crate.io/docs/crate/howtos/en/latest/integrations/kafka-connect.html |
| [Apache NiFi](https://nifi.apache.org/) | https://community.cratedb.com/t/connecting-to-cratedb-from-apache-nifi/647 |
| [Apache Spark](https://spark.apache.org/) | https://community.cratedb.com/t/getting-started-with-apache-spark-and-cratedb-a-step-by-step-tutorial/1595 <br> https://community.cratedb.com/t/introduction-to-azure-databricks-with-cratedb/764 <br> https://github.com/crate/cratedb-examples/tree/main/by-dataframe/spark/scala-http |
| [Apache Superset](https://github.com/apache/superset) / [Preset](https://preset.io/) | https://community.cratedb.com/t/set-up-apache-superset-with-cratedb/1716 <br> https://crate.io/blog/use-cratedb-and-apache-superset-for-open-source-data-warehousing-and-visualization <br> [Introduction to Time-Series Visualization in CrateDB and Superset](https://crate.io/blog/introduction-to-time-series-visualization-in-cratedb-and-superset) |
| [Balena](https://www.balena.io/) | https://community.cratedb.com/t/deploying-cratedb-on-balena-io/1067 |
| [Cluvio](https://www.cluvio.com/) | https://community.cratedb.com/t/data-analysis-with-cluvio-and-cratedb/1571 |
| [Dapr](https://dapr.io/) | https://community.cratedb.com/t/connecting-to-cratedb-from-dapr/660 |
| [DataGrip](https://www.jetbrains.com/datagrip/) | https://cratedb.com/docs/guide/integrate/datagrip/ |
| [Datashader](https://datashader.org/) | [CrateDB Time Series Exploration and Visualization](https://github.com/crate/cratedb-examples/tree/amo/cloud-datashader/topic/timeseries/explore) |
| [Dask](https://www.dask.org/) | https://community.cratedb.com/t/guide-to-efficient-data-ingestion-to-cratedb-with-pandas-and-dask/1482 |
| [DBeaver](https://dbeaver.io/about/) | https://crate.io/blog/cratedb-dbeaver |
| [dbt](https://github.com/dbt-labs/dbt-core) | https://community.cratedb.com/t/using-dbt-with-cratedb/1566 |
| [Debezium](https://debezium.io/) | https://community.cratedb.com/t/replicating-data-from-other-databases-to-cratedb-with-debezium-and-kafka/1388 |
| [Explo](https://www.explo.co/) | https://crate.io/blog/introduction-to-time-series-visualization-in-cratedb-and-explo |
| [JMeter](https://jmeter.apache.org) | https://community.cratedb.com/t/jmeter-jdbc-connection-to-cratedb/1051/2?u=jayeff |
| [Grafana](https://grafana.com/) | https://crate.io/blog/visualizing-time-series-data-with-grafana-and-cratedb <br> https://community.cratedb.com/t/monitoring-an-on-premises-cratedb-cluster-with-prometheus-and-grafana/1236 |
| [Kestra.io](https://kestra.io/) | https://community.cratedb.com/t/guide-to-cratedb-data-pipelines-with-kestra-io/1400 |
| [LangChain](https://www.langchain.com/) | https://community.cratedb.com/t/how-to-set-up-langchain-with-cratedb/1576 |
| [Locust](https://locust.io) | https://community.cratedb.com/t/loadtesting-cratedb-using-locust/1686 |
| [Meltano](https://meltano.com/) | [Meltano Examples](https://github.com/crate/cratedb-examples/tree/amo/meltano/framework/singer-meltano) |
| [Metabase](https://www.metabase.com/) | https://community.cratedb.com/t/visualizing-data-with-metabase/1401 <br> https://community.cratedb.com/t/demo-of-metabase-and-cratedb-getting-started/1436 |
| [Node-RED](https://nodered.org/) | https://community.cratedb.com/t/ingesting-mqtt-messages-into-cratedb-using-node-red/803 |
| [pandas](https://pandas.pydata.org/) | https://community.cratedb.com/t/from-data-storage-to-data-analysis-tutorial-on-cratedb-and-pandas-2/1440 <br> https://community.cratedb.com/t/automating-financial-data-collection-and-storage-in-cratedb-with-python-and-pandas/916 <br> https://community.cratedb.com/t/importing-parquet-files-into-cratedb-using-apache-arrow-and-sqlalchemy/1161 <br> https://community.cratedb.com/t/guide-to-efficient-data-ingestion-from-pandas-to-cratedb/1541 |
| [PowerBI](https://powerbi.microsoft.com/en-us/) | https://crate.io/docs/crate/howtos/en/latest/integrations/powerbi-desktop.html <br> https://crate.io/docs/crate/howtos/en/latest/integrations/powerbi-gateway.html |
| [Prefect](https://www.prefect.io/) | https://community.cratedb.com/t/building-seamless-data-pipelines-made-easy-combining-prefect-and-cratedb/1555 |
| [Prometheus](https://prometheus.io/) | https://community.cratedb.com/t/cratedb-and-prometheus-for-long-term-metrics-storage/1012 <br> https://community.cratedb.com/t/monitoring-an-on-premises-cratedb-cluster-with-prometheus-and-grafana/1236 |
| [PyCaret](https://pycaret.org/) | [AutoML with PyCaret and CrateDB](https://github.com/crate/cratedb-examples/tree/main/topic/machine-learning/automl) |
| [R](https://www.r-project.org/) | https://crate.io/docs/crate/howtos/en/latest/integrations/r.html |
| [Rill](https://www.rilldata.com/) | https://community.cratedb.com/t/introducing-rill-and-bi-as-code-with-cratedb-cloud/1718 |
| [Rsyslog](https://www.rsyslog.com/) | https://community.cratedb.com/t/storing-server-logs-on-cratedb-for-fast-search-and-aggregations/1562 |
| [SQLPad](https://crate.io/blog/use-cratedb-with-sqlpad-as-a-self-hosted-query-tool-and-visualizer) | https://crate.io/blog/use-cratedb-with-sqlpad-as-a-self-hosted-query-tool-and-visualizer |
| [StreamSets](https://crate.io/docs/crate/howtos/en/latest/integrations/streamsets.html) | https://crate.io/docs/crate/howtos/en/latest/integrations/streamsets.html |
| [Tableau](https://www.tableau.com/) | https://community.cratedb.com/t/using-cratedb-with-tableau/1192 |
| [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) | https://crate.io/blog/use-cratedb-with-telegraf-an-agent-for-collecting-reporting-metrics |
| [TensorFlow](https://www.tensorflow.org/) | https://crate.io/docs/crate/howtos/en/latest/integrations/ml-dist.html |
| [Terraform](https://www.terraform.io/) | https://community.cratedb.com/t/deploying-cratedb-to-the-cloud-via-terraform/849 |
| [Trino](https://trino.io/) | https://community.cratedb.com/t/connecting-to-cratedb-using-trino/993 |
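Most of the tools and libraries listed above talk to CrateDB through its PostgreSQL wire protocol on port 5432, or through its HTTP API on port 4200. The following minimal sketch illustrates the PostgreSQL path from Python using the generic `psycopg2` driver; the host, credentials, and query are assumptions for a local single-node instance with default settings, not taken from any specific tutorial above.

```python
# Minimal sketch: connect to CrateDB via its PostgreSQL wire protocol.
# Assumptions: CrateDB runs locally on the default port 5432, with the
# default superuser "crate" and no password.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5432, user="crate", dbname="doc")
try:
    with conn.cursor() as cursor:
        # sys.cluster is a built-in system table; querying it simply
        # verifies that the connection works.
        cursor.execute("SELECT name FROM sys.cluster")
        print(cursor.fetchone())
finally:
    conn.close()
```

Tools that prefer the HTTP interface, such as the Admin UI or the `crash` shell, connect to port 4200 instead.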