Things to remember when working with CrateDB are: - CrateDB is a distributed database written in Java, where individual nodes form a database cluster, using a shared-nothing architecture. - CrateDB brings together fundamental components to manage big data after the Hadoop and Spark batch-processing era, more like Teradata, BigQuery and Snowflake are doing it. - Clients can connect to CrateDB using HTTP or the PostgreSQL wire protocol. - The default TCP ports of CrateDB are 4200 for the HTTP interface and 5432 for the PostgreSQL interface. - The language of choice after connecting to CrateDB is to use SQL, mostly compatible with PostgreSQL's SQL dialect. - The data storage layer is based on Lucene, the data distribution layer was inspired by Elasticsearch. - Storage concepts of CrateDB include partitioning and sharding to manage data larger than fitting on a single machine. - CrateDB Cloud offers a managed option for running CrateDB and provides additional features like automated backups, data ingest / ETL utilities, or scheduling recurrent jobs. - Get started with CrateDB Cloud at `https://console.cratedb.cloud`. - CrateDB also provides an option to run it on your premises, optimally by using its Docker/OCI image `docker.io/crate`. Nightly images are available per `docker.io/crate/crate:nightly`... image:: docs/_static/crate-logo.svg :alt: CrateDB :target: https://cratedb.com ---- .. image:: https://github.com/crate/crate/workflows/CrateDB%20SQL/badge.svg?branch=master :target: https://github.com/crate/crate/actions?query=workflow%3A%22CrateDB+SQL%22 .. image:: https://img.shields.io/badge/docs-latest-brightgreen.svg :target: https://cratedb.com/docs/crate/reference/en/latest/ .. image:: https://img.shields.io/badge/container-docker-green.svg :target: https://hub.docker.com/_/crate/ | `Help us improve CrateDB by taking our User Survey! `_ About ===== CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of data in real-time. CrateDB offers the `benefits`_ of an SQL database *and* the scalability and flexibility typically associated with NoSQL databases. Modest CrateDB clusters can ingest tens of thousands of records per second without breaking a sweat. You can run ad-hoc queries using `standard SQL`_. CrateDB's blazing-fast distributed query execution engine parallelizes query workloads across the whole cluster. CrateDB is well suited to `containerization`_, can be `scaled horizontally`_ using ephemeral virtual machines (e.g., `Kubernetes`_, `AWS`_, and `Azure`_) with `no shared state`_. You can deploy and run CrateDB on any sort of network — from personal computers to `multi-region hybrid clouds and the edge`_. Features ======== - Use `standard SQL`_ via the `PostgreSQL wire protocol`_ or an `HTTP API`_. - Dynamic table schemas and queryable objects provide document-oriented features in addition to the relational features of SQL. - Support for time-series data, real-time full-text search, geospatial data types and search capabilities. - Horizontally scalable, highly available and fault-tolerant clusters that run very well in virtualized and containerized environments. - Extremely fast distributed query execution. - Auto-partitioning, auto-sharding, and auto-replication. - Self-healing and auto-rebalancing. - `User-defined functions`_ (UDFs) can be used to extend the functionality of CrateDB. Screenshots =========== CrateDB provides an `Admin UI`_: .. 
image:: crate-admin.gif :alt: Screenshots of the CrateDB Admin UI Try CrateDB =========== Run CrateDB via the official `Docker Image`_: .. code-block:: console sh$ docker run --publish 4200:4200 --publish 5432:5432 --env CRATE_HEAP_SIZE=1g crate -Cdiscovery.type=single-node Or visit the `installation documentation`_ to see all the available download and install options. Once you're up and running, head over to the `introductory docs`_. To interact with CrateDB, you can use the Admin UI `sql console`_ or the `CrateDB shell`_ CLI tool. Alternatively, review the list of recommended `clients and tools`_ that work with CrateDB. For container-specific documentation, check out the `CrateDB on Docker how-to guide`_ or the `CrateDB on Kubernetes how-to guide`_. Contributing ============ This project is primarily maintained by `Crate.io`_, but we welcome community contributions! See the `developer docs`_ and the `contribution docs`_ for more information. Security ======== The CrateDB team and community take security bugs seriously. We appreciate your efforts to `responsibly disclose`_ your findings, and will make every effort to acknowledge your contributions. If you think you discovered a security flaw, please follow the guidelines at `SECURITY.md`_. Help ==== Looking for more help? - Try one of our `beginner tutorials`_, `how-to guides`_, or consult the `reference manual`_. - Check out our `support channels`_. - `Crate.io`_ also offers `CrateDB Cloud`_, a fully-managed *CrateDB Database as a Service* (DBaaS). The `CrateDB Cloud Tutorials`_ will get you started. .. _Admin UI: https://cratedb.com/docs/crate/admin-ui/ .. _AWS: https://cratedb.com/docs/crate/tutorials/en/latest/cloud/aws/index.html .. _Azure: https://cratedb.com/docs/crate/tutorials/en/latest/cloud/azure/index.html .. _beginner tutorials: https://cratedb.com/docs/crate/tutorials/ .. _benefits: https://cratedb.com/product#compare .. _clients and tools: https://cratedb.com/docs/crate/clients-tools/ .. _containerization: https://cratedb.com/docs/crate/tutorials/en/latest/containers/docker.html .. _contribution docs: CONTRIBUTING.rst .. _Crate.io: https://cratedb.com/company/team .. _CrateDB clients and tools: https://cratedb.com/docs/crate/clients-tools/ .. _CrateDB Cloud Tutorials: https://cratedb.com/docs/cloud/ .. _CrateDB Cloud: https://cratedb.com/product/pricing .. _CrateDB on Docker how-to guide: https://cratedb.com/docs/crate/tutorials/en/latest/containers/docker.html .. _CrateDB on Kubernetes how-to guide: https://cratedb.com/docs/crate/tutorials/en/latest/containers/kubernetes/index.html .. _CrateDB shell: https://cratedb.com/docs/crate/crash/ .. _developer docs: devs/docs/index.rst .. _Docker image: https://hub.docker.com/_/crate/ .. _document-oriented: https://en.wikipedia.org/wiki/Document-oriented_database .. _Dynamic table schemas: https://cratedb.com/docs/crate/reference/en/master/general/ddl/column-policy.html .. _fulltext search: https://cratedb.com/docs/crate/reference/en/latest/general/dql/fulltext.html .. _geospatial features: https://cratedb.com/docs/crate/reference/en/master/general/dql/geo.html .. _how-to guides: https://cratedb.com/docs/crate/howtos/ .. _HTTP API: https://cratedb.com/docs/crate/reference/en/latest/interfaces/http.html .. _installation documentation: https://cratedb.com/docs/crate/tutorials/en/latest/basic/index.html .. _introductory docs: https://cratedb.com/docs/crate/tutorials/ .. _Kubernetes: https://cratedb.com/docs/crate/tutorials/en/latest/containers/kubernetes/index.html .. 
_multi-region hybrid clouds and the edge: https://cratedb.com/docs/cloud/en/latest/tutorials/edge/index.html .. _no shared state: https://en.wikipedia.org/wiki/Shared-nothing_architecture .. _PostgreSQL wire protocol: https://cratedb.com/docs/crate/reference/en/latest/interfaces/postgres.html .. _queryable objects: https://cratedb.com/docs/crate/reference/en/master/general/dql/selects.html#container-data-types .. _reference manual: https://cratedb.com/docs/crate/reference/ .. _relational: https://en.wikipedia.org/wiki/Relational_model .. _responsibly disclose: https://en.wikipedia.org/wiki/Coordinated_vulnerability_disclosure .. _scaled horizontally: https://stackoverflow.com/questions/11707879/difference-between-scaling-horizontally-and-vertically-for-databases .. _SECURITY.md: https://github.com/crate/crate/blob/master/SECURITY.md .. _sql console: https://cratedb.com/docs/crate/admin-ui/en/latest/console.html#sql-console .. _standard SQL: https://cratedb.com/docs/crate/reference/en/latest/sql/index.html .. _support channels: https://cratedb.com/support .. _time-series data: https://cratedb.com/docs/crate/howtos/en/latest/getting-started/normalize-intervals.html .. _user-defined functions: https://cratedb.com/docs/crate/reference/en/latest/general/user-defined-functions.html--- orphan: true --- # CrateDB Documentation Welcome to the official CrateDB Documentation. Whether you are a developer, database administrator, or just starting your journey with CrateDB, our documentation provides the information and knowledge needed to build real-time analytics and hybrid search applications that leverage CrateDB's unique features. :::{rubric} Benefits ::: * In a unified data platform approach, CrateDB includes analyzing relational, JSON, time-series, geospatial, full-text, and vector data within a single system, eliminating the need for multiple databases. * The fully distributed SQL query engine, built on top of Apache Lucene, and inheriting technologies from Elasticsearch/OpenSearch, provides performant aggregations and advanced SQL features like JOINs and CTEs on large datasets of semi-structured data. * Real-time indexing automatically indexes all columns, including nested structures, as data is ingested, eliminating the need to worry about indexing strategy. * The flexible data schema dynamically adapts based on the data you ingest, offering seamless integration and instant readiness for analysis. * Columnar storage enables fast search query and aggregation performance. * PostgreSQL wire protocol compatibility and a HTTP interface provide versatile integration capabilities. * AI-ready: The vector store subsystem integrates well with an extensive 3rd party ecosystem of AI/ML frameworks for advanced data analysis and data-driven decisions. ::::::{grid} 1 :margin: 1 :padding: 2 :::{grid-item-card} {material-outlined}`rocket_launch;1.7em` CrateDB Cloud :link: cloud-docs-index :link-type: ref :link-alt: CrateDB Cloud :padding: 2 :class-title: sd-fs-5 Start with a fully managed CrateDB instance to accelerate and simplify working with analytical data. CrateDB Cloud enables seamless deployment, monitoring, backups, and scaling of CrateDB clusters on AWS, Azure or GCPs, eliminating the need for direct database management. With CrateDB Cloud, you can skip infrastructure setup and focus on delivering value for your business with a query console, SQL Scheduler, table policies and various connectors to import data. 
+++ ```{button-link} https://cratedb.com/docs/cloud/tutorials/quick-start.html :color: primary :expand: **Start forever free cluster with 8 GB of storage** ``` ::: :::::{grid-item} :margin: 0 :padding: 2 ::::{grid} 2 :margin: 0 :padding: 0 :::{grid-item-card} {material-outlined}`lightbulb;1.7em` Database Features :link: https://cratedb.com/docs/guide/feature/ :link-alt: Database Features :class-title: sd-fs-5 Explore all functional, operational and advanced features of CrateDB at a glance. ::: :::{grid-item-card} {material-outlined}`auto_stories;1.7em` Database Manual :link: https://cratedb.com/docs/reference/ :link-alt: Database Manual :class-title: sd-fs-5 Learn core CrateDB concepts, including data modeling, querying data, aggregations, sharding, and more. ::: :::: ::::: :::{grid-item-card} {material-outlined}`link;1.7em` Client Libraries :link: https://cratedb.com/docs/crate/clients-tools/en/latest/connect/ :link-alt: CrateDB: Client Drivers and Libraries :padding: 2 :class-title: sd-fs-5 Learn how to connect your applications using database drivers, libraries, adapters, and connectors. CrateDB supports both the [HTTP protocol] and the [PostgreSQL wire protocol], ensuring compatibility with many PostgreSQL clients. Through corresponding drivers and adapters, CrateDB is compatible with [ODBC], [JDBC], and other database API specifications. ::: :::::: ## Learn :::{rubric} Videos ::: ::::{card} Today's data challenges and a high level overview of CrateDB :class-title: sd-fs-4 :class-body: sd-text-center :class-footer: sd-fs-6 :::{youtube} cByAOsaYddQ ::: +++ _Webinar: Turbocharge your aggregations, search & AI models & get real-time insights._ :::{div} text-smaller Discover CrateDB, the leading real-time analytics database. It provides the flexibility, speed, and scalability necessary to master today's data challenges. Watch this video to learn how CrateDB empowers you with real-time insights into your data to fuel advanced analytics, search, and AI models—enabling informed decisions that drive meaningful impact. ::: :::: ::::{card} CrateDB Videos curated by Simon Prickett :class-footer: sd-fs-6 Simon leads Developer Relations at CrateDB. Here, he is [sharing a playlist of videos] he has been part of that will show you what CrateDB is and how you can use it for a variety of projects. Make sure you also do not miss relevant [CrateDB customer stories]. :::: :::{rubric} Introduction ::: Learn about the fundamentals of CrateDB, guided and self-guided. ::::{grid} 2 2 4 4 :padding: 0 :::{grid-item-card} :link: https://cratedb.com/docs/guide/getting-started.html :link-alt: Getting started with CrateDB :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller Getting Started ^^^ {material-outlined}`not_started;3.5em` +++ Learn how to interact with the database for the first time. ::: :::{grid-item-card} :link: https://cratedb.com/docs/guide/ :link-alt: The CrateDB Guide :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller The CrateDB Guide ^^^ {material-outlined}`hiking;3.5em` +++ Guides and tutorials about how to use CrateDB in practice. 
::: :::{grid-item-card} :link: https://learn.cratedb.com/ :link-alt: The CrateDB Academy :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller Academy Courses ^^^ {material-outlined}`school;3.5em` +++ A learning hub dedicated to data enthusiasts. ::: :::{grid-item-card} :link: https://community.cratedb.com/ :link-alt: The CrateDB Community Portal :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold sd-text-capitalize :class-body: sd-text-center sd-fs-5 :class-footer: text-smaller Community Portal ^^^ {material-outlined}`groups;3.5em` +++ A hangout place for members of the CrateDB community. ::: :::: :::{rubric} Admin Tools ::: Learn about the fundamental tools that support working directly with CrateDB. ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} Admin UI :link: https://cratedb.com/docs/crate/admin-ui/ :link-alt: The CrateDB Admin UI :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`admin_panel_settings;3.5em` +++ Learn about CrateDB's included web administration interface. ::: :::{grid-item-card} Crash CLI :link: https://cratedb.com/docs/crate/crash/ :link-alt: The Crash CLI :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`terminal;3.5em` +++ A command-line interface (CLI) tool for working with CrateDB. ::: :::: :::{rubric} Drivers and Integrations ::: Learn about database client libraries, drivers, adapters, connectors, and integrations with 3rd-party applications and frameworks. ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} Ecosystem Catalog :link: catalog :link-type: ref :link-alt: Ecosystem Catalog :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`category;3.5em` +++ Discover integrations and solutions from the open-source community and CrateDB partners. ::: :::{grid-item-card} Integration Tutorials I :link: integrate :link-type: ref :link-alt: Integration Tutorials I :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`integration_instructions;3.5em` +++ Learn about the variety of options to connect and integrate with 3rd-party applications. ::: :::{grid-item-card} Integration Tutorials II :link: https://community.cratedb.com/t/overview-of-cratedb-integration-tutorials/1015 :link-alt: Integration Tutorials II :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`local_library;3.5em` +++ Integration-focused tutorials to help you use CrateDB together with other tools and libraries. ::: :::: ## Examples Learn how to use CrateDB by digesting concise examples. ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} CrateDB Examples :link: https://github.com/crate/cratedb-examples :link-alt: CrateDB Examples :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`play_circle;3.5em` +++ A collection of clear and concise examples how to work with CrateDB. 
::: :::{grid-item-card} Sample Apps :link: https://github.com/crate/crate-sample-apps/ :link-alt: CrateDB Sample Apps :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`apps;3.5em` +++ Different client libraries used by canonical guestbook demo web applications. ::: :::: [CrateDB customer stories]: https://www.youtube.com/playlist?list=PLDZqzXOGoWUJrAF_lVx9U6BzAGG9xYz_v [HTTP protocol]: https://en.wikipedia.org/wiki/HTTP [Integrations]: #integrate [JDBC]: https://en.wikipedia.org/wiki/Java_Database_Connectivity [ODBC]: https://en.wikipedia.org/wiki/Open_Database_Connectivity [PostgreSQL wire protocol]: https://www.postgresql.org/docs/current/protocol.html [sharing a playlist of videos]: https://www.youtube.com/playlist?list=PL3cZtICBssphXl5rHgsgG9vTNAVTw_Veq.. _index: ================= CrateDB Reference ================= CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of data in real-time. .. NOTE:: This resource assumes you know the basics. If not, check out the `Tutorials`_ section for beginner material. .. SEEALSO:: CrateDB is an open source project and is `hosted on GitHub`_. .. rubric:: Table of contents .. toctree:: :maxdepth: 2 concepts/index cli-tools config/index general/index admin/index sql/index interfaces/index appendices/index .. _Tutorials: https://crate.io/docs/crate/tutorials/en/latest/ .. _hosted on GitHub: https://github.com/crate/crate.. _concept-clustering: ========== Clustering ========== The aim of this document is to describe, on a high level, how the distributed SQL database CrateDB uses a shared nothing architecture to form high-availability, resilient database clusters with minimal configuration effort. It will lay out the core concepts of the shared nothing architecture at the heart of CrateDB. The main difference to a `primary-secondary architecture`_ is that every node in the CrateDB cluster can perform every operation - hence all nodes are equal in terms of functionality (see :ref:`concept-node-components`) and are configured the same. .. rubric:: Table of contents .. contents:: :local: .. _concept-node-components: Components of a CrateDB Node ============================ To understand how a CrateDB cluster works it makes sense to first take a look at the components of an individual node of the cluster. .. _figure_1: .. figure:: interconnected-crate-nodes.png :align: center Figure 1 Multiple interconnected instances of CrateDB form a single database cluster. The components of each node are equal. :ref:`figure_1` shows that in CrateDB each node of a cluster contains the same components that (a) interface with each other, (b) with the same component from a different node and/or (c) with the outside world. These four major components are: SQL Handler, Job Execution Service, Cluster State Service, and Data Storage. SQL Handler ----------- The SQL Handler part of a node is responsible for three aspects: (a) handling incoming client requests, (b) parsing and analyzing the SQL statement from the request and (c) creating an execution plan based on the analyzed statement (`abstract syntax tree`_). The SQL Handler is the only one of the four components that interfaces with the "outside world". CrateDB supports three protocols to handle client requests: (a) HTTP, (b) a Binary Transport Protocol, and (c) the PostgreSQL Wire Protocol. A typical request contains a SQL statement and its corresponding arguments.
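For example, the same statement can be submitted either over the HTTP interface (a JSON request against the ``/_sql`` endpoint) or over the PostgreSQL wire protocol. A minimal sketch, assuming a node running locally with the default ports (4200 for HTTP, 5432 for PostgreSQL) and the default ``crate`` user:

.. code-block:: console

    sh$ curl -sS -H 'Content-Type: application/json' \
          -X POST 'http://localhost:4200/_sql' \
          -d '{"stmt": "SELECT name FROM sys.cluster"}'

    sh$ psql -h localhost -p 5432 -U crate -c 'SELECT name FROM sys.cluster'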
Job Execution Service --------------------- The Job Execution Service is responsible for the execution of a plan ("job"). The phases of the job and the resulting operations are already defined in the execution plan. A job usually consists of multiple operations that are distributed via the Transport Protocol to the involved nodes, be it the local node and/or one or multiple remote nodes. Jobs maintain IDs of their individual operations. This allows CrateDB to "track" (or for example "kill") distributed queries. Cluster State Service --------------------- The three main functions of the Cluster State Service are: (a) cluster state management, (b) election of the master node and (c) node discovery, thus being the main component for cluster building (as described in section :ref:`concept-clusters`). It communicates using the Binary Transport Protocol. Data storage ------------ The data storage component handles operations to store and retrieve data from disk based on the execution plan. In CrateDB, the data stored in the tables is sharded, meaning that tables are divided and (usually) stored across multiple nodes. Each shard is a separate Lucene index that is stored physically on the filesystem. Reads and writes are operating on a shard level. .. _concept-clusters: Multi-node setup: Clusters ========================== A CrateDB cluster is a set of two or more CrateDB instances (referred to as *nodes*) running on different hosts which form a single, distributed database. For inter-node communication, CrateDB uses a software specific transport protocol that utilizes byte-serialized Plain Old Java Objects (`POJOs`_) and operates on a separate port. That so-called "transport port" must be open and reachable from all nodes in the cluster. Cluster state management ------------------------ The cluster state is versioned and all nodes in a cluster keep a copy of the latest cluster state. However, only a single node in the cluster -- the *master node* -- is allowed to change the state at runtime. Settings, metadata, and routing ................................ The cluster state contains all necessary meta information to maintain the cluster and coordinate operations: * Global cluster settings * Discovered nodes and their status * Schemas of tables * The status and location of primary and replica shards When the master node updates the cluster state it will publish the new state to all nodes in the cluster and wait for all nodes to respond before processing the next update. .. _concept-master-election: Master Node Election -------------------- In a CrateDB cluster there can only be one master node at any single time. The cluster only becomes available to serve requests once a master has been elected, and a new election takes place if the current master node becomes unavailable. By default, all nodes are master-eligible, but :ref:`a node setting ` is available to indicate, if desired, that a node must not take on the role of master. To elect a master among the eligible nodes, a majority (``floor(half)+1``), also known as *quorum*, is required among a subset of all master-eligible nodes, this subset of nodes is known as the *voting configuration*. The *voting configuration* is a list which is persisted as part of the cluster state. It is maintained automatically in a way that makes so that split-brain scenarios are never possible. Every time a node joins the cluster, or leaves the cluster, even if it is for a few seconds, CrateDB re-evaluates the voting configuration. 
If the new number of master-eligible nodes in the cluster is odd, CrateDB will put them all in the voting configuration. If the number is even, CrateDB will exclude one of the master-eligible nodes from the voting configuration. The voting configuration is not shrunk below 3 nodes, meaning that if there were 3 nodes in the voting configuration and one of them becomes unavailable, they all stay in the voting configuration and a quorum of 2 nodes is still required. A master node rescinds its role if it cannot contact a quorum of nodes from the latest voting configuration. .. WARNING:: If you do infrastructure maintenance, please note that as nodes are shut down or rebooted, they will temporarily leave the voting configuration, and for the cluster to elect a master a quorum is required among the nodes that were last in the voting configuration. For instance, if you have a 5-node cluster, with all nodes master-eligible, and node 1 is currently the master, and you shut down node 5, then node 4, then node 3, the cluster will stay available as the voting configuration will have adapted to only have nodes 1, 2, and 3 on it. If you then shut down one more node the cluster will become unavailable as a quorum of 2 nodes is now required and not available. To bring the cluster back online at this point you will require two nodes among 1, 2, and 3. Bringing back nodes 3, 4, and 5 will not be sufficient. .. NOTE:: Special `settings and considerations `_ applied prior to CrateDB version 4.0.0. .. _concept-discovery: Discovery --------- The process of finding, adding and removing nodes is done in the discovery module. .. _figure_2: .. figure:: discovery-process.png :align: center Figure 2 Phases of the node discovery process. n1 and n2 already form a cluster where n1 is the elected master node, n3 joins the cluster. The cluster state update happens in parallel! Node discovery happens in multiple steps: * CrateDB requires a list of potential host addresses for other CrateDB nodes when it is starting up. That list can either be provided by a static configuration or can be dynamically generated, for example by fetching DNS SRV records, querying the Amazon EC2 API, and so on. * All potential host addresses are pinged. Nodes which receive the request respond to it with information about the cluster they belong to, the current master node, and their own node name. * Now that the node knows the master node, it sends a join request. The master node verifies the incoming request and adds the new node to the cluster state that now contains the complete list of all nodes in the cluster. * The cluster state is then published across the cluster. This guarantees the common knowledge of the node addition. .. CAUTION:: If a node is started without any :ref:`initial_master_nodes ` or a :ref:`discovery_type ` set to ``single-node`` (e.g., the default configuration), it will never join a cluster even if the configuration is subsequently changed. It is possible to force the node to forget its current cluster state by using the :ref:`cli-crate-node` CLI tool. However, be aware that this may result in data loss. Networking ---------- In a CrateDB cluster all nodes have a direct link to all other nodes; this is known as `full mesh`_ topology. For simplicity, every node maintains a one-way connection to every other node in the network. The network topology of a 5 node cluster looks like this: .. _figure_3: .. figure:: mesh-network-topology.png :align: center :width: 50% Figure 3 Network topology of a 5 node CrateDB cluster.
Each line represents a one-way connection. The advantages of a fully connected network are that it provides a high degree of reliability and the paths between nodes are the shortest possible. However, there are limits to the size of such a network because the number of connections (c) grows quadratically with the number of nodes (n): .. code-block:: mathematica c = n * (n - 1) Cluster behavior ================ The fact that each CrateDB node in a cluster is equal allows applications and users to connect to any node and get the same response for the same operations. As already described in section :ref:`concept-node-components`, the SQL handler is responsible for handling incoming client SQL requests, either using the HTTP protocol or the PostgreSQL wire protocol. The "handler node" that accepts the client request also returns the response to the client. It neither redirects nor delegates the request to a different node. The handler node parses the incoming request into a syntax tree, analyzes it and creates an execution plan locally. Then the operations of the plan are executed in a distributed manner. The upstream of the final phase of the execution is always the handler node, which then returns the response to the client. Application use case ==================== In a conventional setup of an application using a primary-secondary database the deployed stack looks similar to this: .. _figure_4: .. figure:: conventional-deployment.png :align: center Figure 4 Conventional deployment of an application-database stack. However, this setup does not scale because all application servers use the same, single entry point to the database for writes (the application can still read from secondaries) and if that entry point is unavailable the complete stack is broken. Choosing a shared nothing architecture allows DevOps to deploy their applications in an "elastic" manner without a single point of failure (SPoF). The idea is to extend the shared nothing architecture from the database to the application which in most cases is stateless already. .. _figure_5: .. figure:: shared-nothing-deployment.png :align: center Figure 5 Elastic deployment making use of the shared nothing architecture. If you deploy an instance of CrateDB together with every application server you will be able to dynamically scale your database backend up and down depending on your needs. The application only needs to communicate with its "bound" CrateDB instance on localhost. The load balancer tracks the health of the hosts and if either the application or the database on a single host fails the complete host will be taken out of the load balancing. .. _primary-secondary architecture: https://en.wikipedia.org/wiki/Master/slave_(technology) .. _abstract syntax tree: https://en.wikipedia.org/wiki/Abstract_syntax_tree .. _POJOs: https://en.wikipedia.org/wiki/Plain_Old_Java_Object .. _full mesh: https://en.wikipedia.org/wiki/Network_topology#Mesh .. _split-brain: https://en.wikipedia.org/wiki/Split-brain_(computing).. _concept-joins: ===== Joins ===== :ref:`Joins ` are essential operations in relational databases. They create a link between rows based on common values and allow the meaningful combination of these rows. CrateDB supports joins and, due to its distributed nature, allows you to work with large amounts of data. In this document we will present the following topics. First, an overview of the existing types of joins and algorithms provided.
Then a description of how CrateDB implements them along with the necessary optimizations, which allows us to work with huge datasets. .. rubric:: Table of contents .. contents:: :local: .. _join-types: Join types ========== A join is a relational operation that merges two data sets based on certain properties. :ref:`joins_figure_1` shows which elements appear in which join. .. _joins_figure_1: .. figure:: joins.png :align: center Join Types From left to right, top to bottom: left join, right join, inner join, outer join, and cross join of the sets L and R. .. _join-types-cross: Cross join ---------- A :ref:`cross join ` returns the Cartesian product of two or more relations. The result of the Cartesian product of the relations *L* and *R* consists of all possible combinations of each tuple of the relation *L* with every tuple of the relation *R*. .. _join-types-inner: Inner join ---------- An :ref:`inner join ` is a join of two or more relations that returns only tuples that satisfy the join condition. .. _join-types-equi: Equi Join ......... An *equi join* is a subset of an inner join and a comparison-based join that uses equality comparisons in the join condition. The equi join of the relations *L* and *R* combines a tuple *l* of the relation *L* with a tuple *r* of the relation *R* if the join attributes of both tuples are identical. .. _join-types-outer: Outer join ---------- An :ref:`outer join ` returns a relation consisting of tuples that satisfy the join condition and dangling tuples from one or both of the relations, depending on the outer join type. An outer join can be one of the following types: - **Left** outer join returns tuples of the relation *L* matching tuples of the relation *R* and dangling tuples of the relation *L* padded with null values. - **Right** outer join returns tuples of the relation *R* matching tuples of the relation *L* and dangling tuples from the relation *R* padded with null values. - **Full** outer join returns matching tuples of both relations and dangling tuples produced by left and right outer joins. .. _join-algos: Join algorithms =============== CrateDB supports (a) CROSS JOIN, (b) INNER JOIN, (c) EQUI JOIN, (d) LEFT JOIN, (e) RIGHT JOIN and (f) FULL JOIN. All of these join types are executed using the :ref:`nested loop join algorithm ` except for :ref:`Equi Joins `, which are executed using the :ref:`hash join algorithm `. Special optimizations, depending on the specific use case, are applied to improve execution performance. .. _join-algos-nested-loop: Nested loop join ---------------- The **nested loop** join is the simplest join algorithm. One of the relations is nominated as the inner relation and the other as the outer relation. Each tuple of the outer relation is compared with each tuple of the inner relation and, if the join condition is satisfied, the tuples of the relations *L* and *R* are concatenated and added into the returned virtual relation:: for each tuple l ∈ L do for each tuple r ∈ R do if l.a Θ r.b put tuple(l, r) in Q *Listing 1. Nested loop join algorithm.* .. _join-algos-nested-loop-prim: Primitive nested loop ..................... For joins on some relations, the nested loop operation can be executed directly on the handler node. Specifically, for queries involving a CROSS JOIN or joins on `system tables`_ / `information_schema`_, each shard sends the data to the handler node. Afterwards, this node runs the nested loop, applies limits, etc., and ultimately returns the results.
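For illustration, a hypothetical query of this kind, touching only system tables, would be collected and joined entirely on the handler node:

.. code-block:: SQL

    SELECT n.name AS node_name, s.table_name, s.id AS shard_id
    FROM sys.nodes n
    CROSS JOIN sys.shards s
    LIMIT 20;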
Similarly, joins can be nested, so instead of collecting data from shards the rows can be the result of a previous join or :ref:`table function `. .. _join-algos-nested-loop-dist: Distributed nested loop ....................... Relations are usually distributed to different nodes which require the nested loop to acquire the data before being able to join. After finding the locations of the required shards (which is done in the planning stage), the smaller data set (based on the row count) is broadcast amongst all the nodes holding the shards they are joined with. After that, each of the receiving nodes can start running a nested loop on the subset it has just received. Finally, these intermediate results are pushed to the original (handler) node to merge and return the results to the requesting client (see :ref:`joins_figure_2`). .. _joins_figure_2: .. figure:: nested-loop.png :align: center Nodes that are holding the smaller shards broadcast the data to the processing nodes which then return the results to the requesting node. Queries can be optimized if they contain (a) ORDER BY, (b) LIMIT, or (c) if INNER/EQUI JOIN. In any of these cases, the nested loop can be terminated earlier: - Ordering allows determining whether there are records left - Limit states the maximum number of rows that are returned Consequently, the number of rows is significantly reduced allowing the operation to complete much faster. .. _join-algos-hash: Hash join --------- The Hash Join algorithm is used to execute certain types of joins in a more efficient way than :ref:`Nested Loop `. .. _join-algos-hash-basic: Basic algorithm ............... The operation takes place in one node (the handler node to which the client is connected). The rows of the left relation of the join are read and a hashing algorithm is applied on the fields of the relation which participate in the join condition. The hashing algorithm generates a hash value which is used to store every row of the left relation in the proper position in a `hash table`_. Then the rows of the right relation are read one-by-one and the same hashing algorithm is applied on the fields that participate in the join condition. The generated hash value is used to make a lookup in the `hash table`_. If no entry is found, the row is skipped and the processing continues with the next row from the right relation. If an entry is found, the join condition is validated (handling hash collisions) and on successful validation the combined tuple of left and right relation is returned. .. _joins_figure_3: .. figure:: hash-join.png :align: center Basic hash join algorithm .. _join-algos-hash-block: Block hash join ............... The Hash Join algorithm requires a `hash table`_ containing all the rows of the left relation to be stored in memory. Therefore, depending on the size of the relation (number of rows) and the size of each row, the size of this hash table might exceed the available memory of the node executing the hash join. To resolve this limitation the rows of the left relation are loaded into the hash table in blocks. On every iteration the maximum available size of the `hash table`_ is calculated, based on the number of rows and size of each row of the table but also taking into account the available memory for query execution on the node. Once this block-size is calculated the rows of the left relation are processed and inserted into the `hash table`_ until the block-size is reached. 
The operation then starts reading the rows of the right relation, processes them one by one, and performs the lookup and the join condition validation. Once all rows from the right relation are processed, the `hash table`_ is re-initialized based on a new calculation of the block size and a new iteration starts until all rows of the left relation are processed. With this algorithm the memory limitation is handled at the expense of having to iterate over the rows of the right table multiple times, and it is the default algorithm used for Hash Join execution by CrateDB. .. _join-algos-hash-block-switch: Switch tables optimization '''''''''''''''''''''''''' Since the right table can be processed multiple times (number of rows from left / block-size), the right table should be the smaller (in number of rows) of the two relations participating in the join. Therefore, if the right relation is originally larger than the left, the query planner performs a switch to take advantage of this detail and execute the hash join with better performance. .. _join-algos-hash-dist: Distributed block hash join ........................... Since CrateDB is a distributed database and a standard deployment consists of at least three nodes, and in most cases many more, the Hash Join algorithm execution can be further optimized (performance-wise) by executing it in a distributed manner across the CrateDB cluster. The idea is to have the hash join operation executing on multiple nodes of the cluster in parallel and then merge the intermediate results before returning them to the client. A hashing algorithm is applied on every row of both the left and right relations. A modulo by the number of nodes in the cluster is applied to the integer value generated by this hash, and the resulting number defines the node to which this row should be sent. As a result, each node of the cluster receives a subset of the whole data set which is ensured (by the hashing and modulo) to contain all candidate matching rows. Each node in turn performs a :ref:`block hash join ` on this subset and sends its result tuples to the handler node (where the client issued the query). Finally, the handler node receives those intermediate results, merges them, applies any pending ``ORDER BY``, ``LIMIT`` and ``OFFSET``, and sends the final result to the client. This algorithm is used by CrateDB for most cases of hash join execution except for joins on complex subqueries that contain ``LIMIT`` and/or ``OFFSET``. .. _joins_figure_4: .. figure:: distributed-hash-join.png :align: center Distributed hash join algorithm .. _join-optim: Join optimizations ================== .. _join-optim-optim-query-fetch: Query then fetch ---------------- Join operations on large relations can be extremely slow, especially if the join is executed with a :ref:`Nested Loop `, which means that the runtime complexity grows quadratically (O(n*m)). Specifically for :ref:`cross joins ` this results in large amounts of data sent over the network and loaded into memory at the handler node. CrateDB reduces the volume of data transferred by employing "Query Then Fetch": First, filtering and ordering are applied (if possible where the data is located) to obtain the required document IDs. Next, as soon as the final data set is ready, CrateDB fetches the selected fields and returns the data to the client. ..
_join-optim-optim-push-down: Push-down query optimization ---------------------------- Complex queries such as Listing 2 require the planner to decide when to filter, sort, and merge in order to efficiently execute the plan. In this case, the query would be split internally into subqueries before running the join. As shown in :ref:`joins_figure_5`, first filtering (and ordering) is applied to relations *L* and *R* on their shards, then the result is directly broadcast to the nodes running the join. Not only will this behavior reduce the number of rows to work with, it also distributes the workload among the nodes so that the (expensive) join operation can run faster. .. code-block:: SQL SELECT L.a, R.x FROM L, R WHERE L.id = R.id AND L.b > 100 AND R.y < 10 ORDER BY L.a *Listing 2. An INNER JOIN on ids (effectively an EQUI JOIN) which can be optimized.* .. _joins_figure_5: .. figure:: push-down.png :align: center Figure 5 Complex queries are broken down into subqueries that are run on their shards before joining. .. _join-optim-cross-join-elimination: Cross join elimination ---------------------- The optimizer will try to eliminate cross joins in the query plan by changing the join-order. Cross join elimination replaces a CROSS JOIN with an INNER JOIN if query conditions used in the WHERE clause or other join conditions allow for it. An example: .. code-block:: SQL SELECT * FROM t1 CROSS JOIN t2 INNER JOIN t3 ON t3.z = t1.x AND t3.z = t2.y The cross join elimination will change the order of the query from t1, t2, t3 to t2, t1, t3 so that each join has a join condition and the CROSS JOIN can be replaced by an INNER JOIN. When reordering, it will try to preserve the original join order as much as possible. If a CROSS JOIN cannot be eliminated, the original join order will be maintained. This optimizer rule can be disabled with the :ref:`optimizer eliminate cross join session setting `:: SET optimizer_eliminate_cross_join = false Note that this setting is experimental, and may change in the future. .. _hash table: https://en.wikipedia.org/wiki/Hash_table .. _here: http://www.dcs.ed.ac.uk/home/tz/phd/thesis.pdf .. _information_schema: https://crate.io/docs/reference/sql/information_schema.html .. _system tables: https://crate.io/docs/reference/sql/system.html.. _concept-storage-consistency: ======================= Storage and consistency ======================= This document provides an overview on how CrateDB stores and distributes state across the cluster and what consistency and durability guarantees are provided. .. NOTE:: Since CrateDB heavily relies on Elasticsearch_ and Lucene_ for storage and cluster consensus, concepts shown here might look familiar to Elasticsearch_ users, since the implementation is actually reused from the Elasticsearch_ code. .. rubric:: Table of contents .. contents:: :local: .. _concept-data-storage: Data storage ============ Every table in CrateDB is sharded, which means that tables are divided and distributed across the nodes of a cluster. Each shard in CrateDB is a Lucene_ index broken down into segments getting stored on the filesystem. Physically the files reside under one of the configured data directories of a node. Lucene only appends data to segment files, which means that data written to the disc will never be mutated. This makes it easy for replication and :ref:`recovery `, since syncing a shard is simply a matter of fetching data from a specific marker. An arbitrary number of replica shards can be configured per table. 
Every operational replica holds a fully synchronized copy of the primary shard. With read operations, there is no difference between executing the operation on the primary shard or on any of the replicas. CrateDB randomly assigns a shard when routing an operation. It is possible to configure this behavior if required; see our best practice guide on `multi zone setups `_ for more details. Write operations are handled differently than reads. Such operations are synchronous over all active replicas with the following flow: 1. The primary shard and the active replicas are looked up in the cluster state for the given operation. The primary shard and a quorum of the configured replicas need to be available for this step to succeed. 2. The operation is routed to the corresponding primary shard for execution. 3. The operation gets executed on the primary shard. 4. If the operation succeeds on the primary, the operation gets executed on all replicas in parallel. 5. After all replica operations finish, the operation result gets returned to the caller. Should any replica shard fail to write the data or time out in step 5, it is immediately considered unavailable. .. _concept-atomicity: Atomicity at document level =========================== Each row of a table in CrateDB is a semi-structured document which can be nested arbitrarily deep through the use of object and array types. Operations on documents are atomic, meaning that a write operation on a document either succeeds as a whole or has no effect at all. This is always the case, regardless of the nesting depth or size of the document. CrateDB does not provide transactions. Since every document in CrateDB has a version number assigned, which gets increased every time a change occurs, patterns like `Optimistic Concurrency Control`_ can help to work around that limitation. .. _concept-durability: Durability ========== Each shard has a WAL_, also known as the translog. It guarantees that operations on documents are persisted to disk without having to issue a Lucene-Commit for every write operation. When the translog gets flushed, all data is written to the persistent index storage of Lucene and the translog gets cleared. In case of an unclean shutdown of a shard, the transactions in the translog are replayed upon startup to ensure that all executed operations are permanent. The translog is also directly transferred when a newly allocated replica initializes itself from the primary shard. There is no need to flush segments to disk just for replica :ref:`recovery ` purposes. .. _concept-addressing-documents: Addressing documents ==================== Every document has an :ref:`internal identifier `. By default this identifier is derived from the primary key. Documents living in tables without a primary key are assigned a unique auto-generated ID automatically when created. Each document is :ref:`routed ` to one specific shard according to the :ref:`routing column `. All rows that have the same routing column value are stored in the same shard. The routing column can be specified with the :ref:`CLUSTERED ` clause when creating the table. If a :ref:`primary key ` has been defined, it will be used as the default routing column; otherwise the :ref:`internal document ID ` is used. While transparent to the user, there are internally two ways in which CrateDB accesses documents: :get: Direct access by identifier. Only applicable if the routing key and the identifier can be computed from the given query specification.
(e.g., the full primary key is defined in the where clause). This is the most efficient way to access a document, since only a single shard gets accessed and only a simple index lookup on the ``_id`` field has to be done. :search: Query by matching against fields of documents across all candidate shards of the table. .. _concept-consistency: Consistency =========== CrateDB is eventually consistent for search operations. Search operations are performed on shared ``IndexReaders`` which, besides other functionality, provide caching and reverse lookup capabilities for shards. An ``IndexReader`` is always bound to the Lucene_ segment it was started from, which means it has to be refreshed in order to see new changes. This is done in a time-based manner, but can also be done manually (see `refresh`_). Therefore a search only sees a change if the corresponding ``IndexReader`` was refreshed after that change occurred. If a query specification results in a ``get`` operation, changes are visible immediately. This is achieved by looking up the document in the translog first, which will always have the most recent version of the document. The common update and fetch use-case is therefore possible. If a client updates a row and that row is looked up by its primary key after that update, the changes will always be visible, since the information will be retrieved directly from the translog. There is an exception to this when the ``WHERE`` clause contains complex filtering and/or lots of primary key values. You can find more details :ref:`here `. .. NOTE:: ``Dirty reads`` can occur if the primary shard becomes isolated. The primary will only realize it is isolated once it tries to communicate with its replicas or the master. At that point, a write operation is already committed into the primary and can be read by a concurrent read operation. In order to minimise the window of opportunity for this phenomenon, the CrateDB nodes communicate with the master every second (by default) and once they realise no master is known, they will start rejecting write operations. Every replica shard is updated synchronously with its primary and always carries the same information. Therefore, in terms of consistency, it does not matter whether the primary or a replica shard is accessed. Only the refresh of the ``IndexReader`` affects consistency. .. NOTE:: Due to internal constraints, when the ``WHERE`` clause filters on multiple columns of a ``PRIMARY KEY``, but one or more of those columns is tested against lots of values, the query might be executed using a ``Collect`` operator instead of a ``Get``, thus records might be unavailable until a ``REFRESH`` is run. The same situation could occur when the ``WHERE`` clause contains long complex expressions, e.g.:: SELECT * FROM t WHERE pk1 IN () AND pk2 = 3 AND pk3 = 'foo' SELECT * FROM t WHERE pk1 = ? AND pk2 = ? AND pk3 = ? OR pk1 = ? AND pk2 = ? AND pk3 = ? OR pk1 = ? ... .. CAUTION:: Some outage conditions can affect these consistency claims. See the :ref:`resiliency documentation ` for details. .. _concept-cluster-metadata: Cluster meta data ================= Cluster meta data is held in the so-called "Cluster State", which contains the following information: - Table schemas. - Primary and replica shard locations. Basically just a mapping from shard number to the storage node. - Status of each shard, which tells whether a shard is currently ready for use, is in another state like "initializing" or "recovering", or cannot be assigned at all.
- Information about discovered nodes and their status. - Configuration information. Every node has its own copy of the cluster state. However, only one node is allowed to change the cluster state at runtime. This node is called the "master" node and gets auto-elected. The "master" node has no special configuration at all; all nodes are master-eligible by default, and any master-eligible node can be elected as the master. There is also an automatic re-election if the current master node goes down for some reason. .. NOTE:: To avoid a scenario where two masters could be elected due to network partitioning, CrateDB automatically defines a quorum of nodes with which it is possible to elect a master. For details on how this works and further information, see :ref:`concept-master-election`. To explain the flow of events for any cluster state change, here is an example flow for an ``ALTER TABLE`` statement which changes the schema of a table: #. A node in the cluster receives the ``ALTER TABLE`` request. #. The node sends out a request to the current master node to change the table definition. #. The master node applies the changes locally to the cluster state and sends out a notification to all affected nodes about the change. #. The nodes apply the change, so that they are now in sync with the master. #. Every node might take some local action depending on the type of cluster state change. .. _Elasticsearch: https://www.elasticsearch.org/ .. _Lucene: https://lucene.apache.org/core/ .. _WAL: https://en.wikipedia.org/wiki/Write-ahead_logging .. _Optimistic Concurrency Control: https://crate.io/docs/crate/reference/sql/occ.html .. _refresh: https://crate.io/docs/crate/reference/sql/refresh.html.. _concept-resiliency: ========== Resiliency ========== Distributed systems are tricky. All sorts of things can go wrong that are beyond your control. The network can go away, disks can fail, hosts can be terminated unexpectedly. CrateDB tries very hard to cope with these sorts of issues while maintaining :ref:`availability `, :ref:`consistency `, and :ref:`durability `. However, as with any distributed system, sometimes, *rarely*, things can go wrong. Thankfully, for most use-cases, if you follow best practices, you are extremely unlikely to experience resiliency issues with CrateDB. .. SEEALSO:: :ref:`Appendix: Resiliency Issues ` .. rubric:: Table of contents .. contents:: :local: .. _concept-resiliency-monitoring: Monitoring cluster status ========================= .. figure:: resilience-status.png :alt: The Admin UI in CrateDB has a status indicator which can be used to determine the stability and health of a cluster. A green status indicates that all shards have been replicated, are available, and are not being relocated. This is the lowest risk status for a cluster. The status will turn yellow when there is an elevated risk of encountering issues, due to a network failure or the failure of a node in the cluster. The status is updated every few seconds (depending on your cluster's `ping configuration `_). .. _concept-resiliency-consistency: Storage and consistency ======================= Code that expects the behavior of an `ACID `_ compliant database like MySQL may not always work as expected with CrateDB. CrateDB does not support ACID transactions, but instead has :ref:`atomic operations ` and :ref:`eventual consistency ` at the row level. See also :ref:`concept-clustering`.
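As a minimal sketch, assuming a hypothetical ``readings`` table: an application that needs a freshly written row to be visible to an immediately following search can request an explicit refresh before reading it back.

.. code-block:: SQL

    INSERT INTO readings (id, value) VALUES (1, 42.0);
    -- without a refresh, a search on a non-primary-key column may not see the new row yet
    REFRESH TABLE readings;
    SELECT count(*) FROM readings WHERE value > 40.0;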
Eventual consistency is the trade-off that CrateDB makes in exchange for high availability that can tolerate most hardware and network failures. So you may observe data from different cluster nodes briefly falling out of sync with each other, although over time they will become consistent. For example, you know a row has been written as soon as you get the ``INSERT OK`` message. But that row might not be read back by a subsequent ``SELECT`` on a different node until after a :ref:`table refresh ` (which typically occurs within one second). Your applications should be designed to work with this storage and consistency model. .. _concept-resiliency-deployment: Deployment strategies ===================== When deploying CrateDB you should carefully weigh your need for high availability and disaster recovery against operational complexity and expense. Which strategy you pick is going to depend on the specifics of your situation. Here are some considerations: - CrateDB is designed to scale horizontally. Make sure that your machines are fit for purpose, i.e. use SSDs, increase RAM up to 64 GB, and use multiple CPU cores when you can. But if you want to dynamically increase (or decrease) the capacity of your cluster, `add (or remove) nodes `_. - If availability is a concern, you can add `nodes across multiple zones `_ (e.g. different data centers or geographical regions). The more available your CrateDB cluster is, the more likely it is to withstand external failures like a zone going down. - If data durability or read performance is a concern, you can increase the number of :ref:`table replicas `. More table replicas means a smaller chance of permanent data loss due to hardware failures, in exchange for the use of more disk space and more intra-cluster network traffic. - If disaster recovery is important, you can :ref:`take regular snapshots ` and store those snapshots in cold storage. This safeguards data that has already been successfully written and replicated across the cluster. - CrateDB works well as part of a `data pipeline `_, especially if you're working with high-volume data. If you have a message queue in front of CrateDB, you can configure it with backups and replay the data flow for a specific timeframe. This can be used to recover from issues that affect your data before it has been successfully written and replicated across the cluster. Indeed, this is the generally recommended way to recover from any of the rare consistency or data-loss issues you might encounter when CrateDB experiences network or hardware failures (see next section). .. highlight:: psql .. _partitioned-tables: ================== Partitioned tables ================== .. rubric:: Table of contents .. contents:: :local: .. _partitioned-intro: Introduction ============ A partitioned table is a virtual table consisting of zero or more partitions. A partition is similar to a regular single table and consists of one or more shards. :: partitioned_table | +-- partition 1 | | | +- shard 0 | | | +- shard 1 | +-- partition 2 | +- shard 0 | +- shard 1 A table becomes a partitioned table by defining :ref:`partition columns `. When a record with a new distinct combination of values for the configured :ref:`partition columns ` is inserted, a new partition is created and the document will be inserted into this partition. A partitioned table can be queried like a regular table.
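For example, assuming a hypothetical ``metrics`` table that is partitioned by a ``month`` column, both of the following statements are ordinary queries; the second one, however, only needs to access the shards of the partitions matching the filter:

.. code-block:: SQL

    SELECT count(*) FROM metrics;

    SELECT count(*) FROM metrics WHERE month = '2024-01-01';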
Partitioned tables have the following advantages: - The number of shards can be changed on the partitioned table, which will then change how many shards will be used for the next partition creation. This enables one to start out with few shards per partition initially, and scale up the number of shards for later partitions once traffic and ingest rates increase with the lifetime of an application. - Partitions can be backed up and restored individually. - Queries which contain filters in the ``WHERE`` clause which identify a single partition or a subset of partitions is less expensive than querying all partitions because the shards of the excluded partitions won't have to be accessed. - Deleting data from a partitioned table is cheap if full partitions are dropped. Full partitions are dropped with ``DELETE`` statements where the optimizer can infer from the ``WHERE`` clause and partition columns that all records of a partition match without having to :ref:`evaluate ` against the records. Partitioned tables have the following disadvantages: - If the partition columns are badly chosen you can end up with too many shards in the cluster, affecting the overall stability and performance negatively. - You may end up with empty, stale partitions if delete operations couldn't be optimized to drop full partitions. You may have to watch out for this and invoke ``DELETE`` statements to target single partitions to clean them up. - Some optimizations don't apply to partitioned tables. An example for this is a GROUP BY query where the grouping keys match the ``CLUSTERED BY`` columns of a table. This kind of query can be optimized on regular tables, but cannot be optimized on a partitioned table. .. NOTE:: Keep in mind that the values of the partition columns are internally base32 encoded into the partition name (which is a separate table). So, for every partition, the partition table name includes: - The table schema (optional) - The table name - The base32 encoded partition column value(s) - An internal overhead of 14 bytes Altogether, the table name length must not exceed the :ref:`255 bytes length limitation `. .. CAUTION:: Every table partition is clustered into as many shards as you configure for the table. Because of this, a good partition configuration depends on good :ref:`shard allocation `. Well tuned shard allocation is vital. Read the `sharding guide`_ to make sure you're getting the best performance out of CrateDB. .. _partitioned-creation: Creation ======== It can be created using the :ref:`sql-create-table` statement using the :ref:`sql-create-table-partitioned-by`:: cr> CREATE TABLE parted_table ( ... id bigint, ... title text, ... content text, ... width double precision, ... day timestamp with time zone ... ) CLUSTERED BY (title) INTO 4 SHARDS PARTITIONED BY (day); CREATE OK, 1 row affected (... sec) This creates an empty partitioned table which is not yet backed by real partitions. Nonetheless it does behave like a *normal* table. When the value to partition by references one or more :ref:`sql-create-table-base-columns`, their values must be supplied upon :ref:`sql-insert` or :ref:`sql-copy-from`. Often these values are computed on client side. If this is not possible, a :ref:`generated column ` can be used to create a suitable partition value from the given values on database-side:: cr> CREATE TABLE computed_parted_table ( ... id bigint, ... data double precision, ... created_at timestamp with time zone, ... 
month timestamp with time zone GENERATED ALWAYS AS date_trunc('month', created_at) ... ) PARTITIONED BY (month); CREATE OK, 1 row affected (... sec) .. _partitioned-info-schema: Information schema ================== This table shows up in the ``information_schema.tables`` table, recognizable as partitioned table by a non null ``partitioned_by`` column (aliased as ``p_b`` here):: cr> SELECT table_schema as schema, ... table_name, ... number_of_shards as num_shards, ... number_of_replicas as num_reps, ... clustered_by as c_b, ... partitioned_by as p_b, ... blobs_path ... FROM information_schema.tables ... WHERE table_name='parted_table'; +--------+--------------+------------+----------+-------+---------+------------+ | schema | table_name | num_shards | num_reps | c_b | p_b | blobs_path | +--------+--------------+------------+----------+-------+---------+------------+ | doc | parted_table | 4 | 0-1 | title | ["day"] | NULL | +--------+--------------+------------+----------+-------+---------+------------+ SELECT 1 row in set (... sec) :: cr> SELECT table_schema as schema, table_name, column_name, data_type ... FROM information_schema.columns ... WHERE table_schema = 'doc' AND table_name = 'parted_table' ... ORDER BY table_schema, table_name, column_name; +--------+--------------+-------------+--------------------------+ | schema | table_name | column_name | data_type | +--------+--------------+-------------+--------------------------+ | doc | parted_table | content | text | | doc | parted_table | day | timestamp with time zone | | doc | parted_table | id | bigint | | doc | parted_table | title | text | | doc | parted_table | width | double precision | +--------+--------------+-------------+--------------------------+ SELECT 5 rows in set (... sec) And so on. You can get information about the partitions of a partitioned table by querying the ``information_schema.table_partitions`` table:: cr> SELECT count(*) as partition_count ... FROM information_schema.table_partitions ... WHERE table_schema = 'doc' AND table_name = 'parted_table'; +-----------------+ | partition_count | +-----------------+ | 0 | +-----------------+ SELECT 1 row in set (... sec) As this table is still empty, no partitions have been created. .. _partitioned-insert: Insert ====== :: cr> INSERT INTO parted_table (id, title, width, day) ... VALUES (1, 'Don''t Panic', 19.5, '2014-04-08'); INSERT OK, 1 row affected (... sec) :: cr> SELECT partition_ident, "values", number_of_shards ... FROM information_schema.table_partitions ... WHERE table_schema = 'doc' AND table_name = 'parted_table' ... ORDER BY partition_ident; +--------------------------+------------------------+------------------+ | partition_ident | values | number_of_shards | +--------------------------+------------------------+------------------+ | 04732cpp6osj2d9i60o30c1g | {"day": 1396915200000} | 4 | +--------------------------+------------------------+------------------+ SELECT 1 row in set (... sec) On subsequent inserts with the same :ref:`partition column ` values, no additional partition is created:: cr> INSERT INTO parted_table (id, title, width, day) ... VALUES (2, 'Time is an illusion, lunchtime doubly so', 0.7, '2014-04-08'); INSERT OK, 1 row affected (... sec) :: cr> REFRESH TABLE parted_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT partition_ident, "values", number_of_shards ... FROM information_schema.table_partitions ... WHERE table_schema = 'doc' AND table_name = 'parted_table' ... 
ORDER BY partition_ident; +--------------------------+------------------------+------------------+ | partition_ident | values | number_of_shards | +--------------------------+------------------------+------------------+ | 04732cpp6osj2d9i60o30c1g | {"day": 1396915200000} | 4 | +--------------------------+------------------------+------------------+ SELECT 1 row in set (... sec) .. _partitioned-update: Update ====== :ref:`Partition columns ` cannot be changed, because this would necessitate moving all affected documents. Such an operation would not be atomic and could lead to inconsistent state:: cr> UPDATE parted_table set content = 'now panic!', day = '2014-04-07' ... WHERE id = 1; ColumnValidationException[Validation failed for day: Updating a partitioned-by column is not supported] When using a :ref:`generated column ` as partition column, all the columns referenced in its :ref:`generation expression ` cannot be updated either:: cr> UPDATE computed_parted_table set created_at='1970-01-01' ... WHERE id = 1; ColumnValidationException[Validation failed for created_at: Updating a column which is referenced in a partitioned by generated column expression is not supported] :: cr> UPDATE parted_table set content = 'now panic!' ... WHERE id = 2; UPDATE OK, 1 row affected (... sec) :: cr> REFRESH TABLE parted_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT * from parted_table WHERE id = 2; +----+------------------------------------------+------------+-------+---------------+ | id | title | content | width | day | +----+------------------------------------------+------------+-------+---------------+ | 2 | Time is an illusion, lunchtime doubly so | now panic! | 0.7 | 1396915200000 | +----+------------------------------------------+------------+-------+---------------+ SELECT 1 row in set (... sec) .. _partitioned-delete: Delete ====== Deleting with a ``WHERE`` clause matching all rows of a partition will drop the whole partition instead of deleting every matching document, which is a lot faster:: cr> delete from parted_table where day = 1396915200000; DELETE OK, -1 rows affected (... sec) :: cr> SELECT count(*) as partition_count ... FROM information_schema.table_partitions ... WHERE table_schema = 'doc' AND table_name = 'parted_table'; +-----------------+ | partition_count | +-----------------+ | 0 | +-----------------+ SELECT 1 row in set (... sec) .. _partitioned-querying: Querying ======== ``UPDATE``, ``DELETE`` and ``SELECT`` queries are all optimized to only affect as few partitions as possible based on the partitions referenced in the ``WHERE`` clause. The ``WHERE`` clause is analyzed for partition use by checking the ``WHERE`` conditions against the values of the :ref:`partition columns `. For example, the following query will only operate on the partition for ``day=1396915200000``: .. Hidden: insert some rows:: cr> INSERT INTO parted_table (id, title, content, width, day) VALUES ... (1, 'The incredible foo', 'foo is incredible', 12.9, '2015-11-16'), ... (2, 'The dark bar rises', 'na, na, na, na, na, na, na, na, barman!', 0.5, '1970-01-01'), ... (3, 'Kill baz', '*splatter*, *oommph*, *zip*', 13.5, '1970-01-01'), ... (4, 'Spice Pork And haM', 'want some roses?', -0.0, '1999-12-12'); INSERT OK, 4 rows affected (... sec) .. Hidden: refresh cr> REFRESH TABLE parted_table; REFRESH OK, 3 rows affected (... sec) :: cr> SELECT count(*) FROM parted_table ... WHERE day='1970-01-01' ... ORDER by 1; +----------+ | count(*) | +----------+ | 2 | +----------+ SELECT 1 row in set (... 
sec) Any combination of conditions that can be :ref:`evaluated ` to a partition before actually executing the query is supported:: cr> SELECT id, title FROM parted_table ... WHERE date_trunc('year', day) > '1970-01-01' ... OR extract(day_of_week from day) = 1 ... ORDER BY id DESC; +----+--------------------+ | id | title | +----+--------------------+ | 4 | Spice Pork And haM | | 1 | The incredible foo | +----+--------------------+ SELECT 2 rows in set (... sec) Internally the ``WHERE`` clause is evaluated against the existing partitions and their partition values. These partitions are then filtered to obtain the list of partitions that need to be accessed. .. Hidden: delete:: cr> DELETE FROM parted_table; DELETE OK, -1 rows affected (... sec) .. _partitioned-generated: Partitioning by generated columns --------------------------------- Querying on tables partitioned by generated columns is optimized to infer a minimum list of partitions from the :ref:`partition columns ` referenced in the ``WHERE`` clause: .. Hidden: insert some stuff:: cr> INSERT INTO computed_parted_table (id, data, created_at) VALUES ... (1, 42.0, '2015-11-16T14:27:00+01:00'), ... (2, 0.0, '2015-11-16T00:00:00Z'), ... (3, 23.0,'1970-01-01'); INSERT OK, 3 rows affected (... sec) .. Hidden: refresh:: cr> REFRESH TABLE computed_parted_table; REFRESH OK, 2 rows affected (... sec) :: cr> SELECT id, date_format('%Y-%m', month) as m FROM computed_parted_table ... WHERE created_at = '2015-11-16T13:27:00.000Z' ... ORDER BY id; +----+---------+ | id | m | +----+---------+ | 1 | 2015-11 | +----+---------+ SELECT 1 row in set (... sec) .. _partitioned-alter: Alter ===== Parameters of partitioned tables can be changed as usual (see :ref:`sql_ddl_alter_table` for more information on how to alter regular tables) with the :ref:`sql-alter-table` statement. Common ``ALTER TABLE`` parameters affect both existing partitions and partitions that will be created in the future. :: cr> ALTER TABLE parted_table SET (number_of_replicas = '0-all') ALTER OK, -1 rows affected (... sec) Altering schema information (such as the column policy or adding columns) can only be done on the table (not on single partitions) and will take effect on both existing and new partitions of the table. :: cr> ALTER TABLE parted_table ADD COLUMN new_col text ALTER OK, -1 rows affected (... sec) .. _partitioned-alter-shards: Changing the number of shards ----------------------------- It is possible at any time to change the number of shards of a partitioned table. :: cr> ALTER TABLE parted_table SET (number_of_shards = 10) ALTER OK, -1 rows affected (... sec) .. NOTE:: This will **not** change the number of shards of existing partitions, but the new number of shards will be taken into account when **new** partitions are created. :: cr> INSERT INTO parted_table (id, title, width, day) ... VALUES (2, 'All Good', 3.1415, '2014-04-08'); INSERT OK, 1 row affected (... sec) .. Hidden: refresh table:: cr> REFRESH TABLE parted_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT count(*) as num_shards, sum(num_docs) as num_docs ... FROM sys.shards ... WHERE schema_name = 'doc' AND table_name = 'parted_table'; +------------+----------+ | num_shards | num_docs | +------------+----------+ | 10 | 1 | +------------+----------+ SELECT 1 row in set (... sec) :: cr> SELECT partition_ident, "values", number_of_shards ... FROM information_schema.table_partitions ... WHERE table_schema = 'doc' AND table_name = 'parted_table' ... 
ORDER BY partition_ident; +--------------------------+------------------------+------------------+ | partition_ident | values | number_of_shards | +--------------------------+------------------------+------------------+ | 04732cpp6osj2d9i60o30c1g | {"day": 1396915200000} | 10 | +--------------------------+------------------------+------------------+ SELECT 1 row in set (... sec) .. _partitioned-alter-single: Altering a single partition ........................... We also provide the option to change the number of shards that are already :ref:`allocated ` for an existing partition. This option operates on a per-partition basis, so a specific partition needs to be specified:: cr> ALTER TABLE parted_table PARTITION (day=1396915200000) SET ("blocks.write" = true) ALTER OK, -1 rows affected (... sec) cr> ALTER TABLE parted_table PARTITION (day=1396915200000) SET (number_of_shards = 5) ALTER OK, 0 rows affected (... sec) cr> ALTER TABLE parted_table PARTITION (day=1396915200000) SET ("blocks.write" = false) ALTER OK, -1 rows affected (... sec) :: cr> SELECT partition_ident, "values", number_of_shards ... FROM information_schema.table_partitions ... WHERE table_schema = 'doc' AND table_name = 'parted_table' ... ORDER BY partition_ident; +--------------------------+------------------------+------------------+ | partition_ident | values | number_of_shards | +--------------------------+------------------------+------------------+ | 04732cpp6osj2d9i60o30c1g | {"day": 1396915200000} | 5 | +--------------------------+------------------------+------------------+ SELECT 1 row in set (... sec) .. NOTE:: The same prerequisites and restrictions as with normal tables apply. See :ref:`alter-shard-number`. .. _partitioned-alter-parameters: Alter table parameters ---------------------- It is also possible to alter parameters of single partitions of a partitioned table. However, unlike with the partitioned table itself, it is not possible to alter the schema information of single partitions. To change table parameters such as ``number_of_replicas`` or other table settings, use the :ref:`sql-alter-table-partition`. :: cr> ALTER TABLE parted_table PARTITION (day=1396915200000) RESET (number_of_replicas) ALTER OK, -1 rows affected (... sec) .. _partitioned-alter-table: Alter table ``ONLY`` -------------------- Sometimes one wants to alter a partitioned table, but the changes should only affect new partitions and not existing ones. This can be done by using the ``ONLY`` keyword. :: cr> ALTER TABLE ONLY parted_table SET (number_of_replicas = 1); ALTER OK, -1 rows affected (... sec) .. _partitioned-alter-close-open: Closing and opening a partition ------------------------------- A single partition within a partitioned table can be opened and closed in the same way a normal table can. :: cr> ALTER TABLE parted_table PARTITION (day=1396915200000) CLOSE; ALTER OK, -1 rows affected (... sec) This will cause all operations besides ``ALTER TABLE ... OPEN`` to fail on this partition. The partition will also not be included in any query on the partitioned table. .. _partitioned-limitations: Limitations =========== * ``WHERE`` clauses cannot contain queries like ``partitioned_by_column='x' OR normal_column=x`` .. _partitioned-consistency: Consistency notes related to concurrent DML statements ===================================================== If a partition is deleted during an active insert or update bulk operation, this partition won't be re-created. The number of affected rows will always reflect the real number of inserted/updated documents.
.. Hidden: drop table:: cr> drop table parted_table; DROP OK, 1 row affected (... sec) .. Hidden: drop computed table:: cr> DROP TABLE computed_parted_table; DROP OK, 1 row affected (... sec) .. _sharding guide: https://crate.io/docs/crate/howtos/en/latest/performance/sharding.html .. _ddl-storage: ======= Storage ======= Data storage options can be tuned for each column, similar to how indexing is defined. .. _ddl-storage-columnstore: Column store ============ Besides storing the row data as-is (and indexing each value by default), each value term is also stored in a `Column Store`_ by default. Using a `Column Store`_ greatly improves global aggregations and groupings, and it enables efficient ordering, because the data for one column is packed in one place. Using the `Column Store`_ limits the values of :ref:`type-text` columns to a maximum length of 32766 bytes. Turning off the `Column Store`_ in conjunction with :ref:`turning off indexing ` will remove the length limitation. Example: :: cr> CREATE TABLE t1 ( ... id INTEGER, ... url TEXT INDEX OFF STORAGE WITH (columnstore = false) ... ); CREATE OK, 1 row affected (... sec) Doing so will enable support for inserting strings longer than 32766 bytes into the ``url`` column, but the performance of global aggregations, groupings, and sorting using this ``url`` column will decrease. .. NOTE:: ``INDEX OFF`` and therefore ``columnstore = false`` cannot be used with :ref:`partition columns `, as those are not stored as normal columns of a table. .. hide: cr> drop table t1; DROP OK, 1 row affected (... sec) Supported data types -------------------- Controlling whether values are stored in a `Column Store`_ is only supported for the following data types: - :ref:`type-text` - :ref:`data-types-numeric` - :ref:`type-timestamp` - :ref:`type-timestamp-with-tz` For all other :ref:`data-types-primitive` and :ref:`data-types-geo-point` it is enabled by default and cannot be disabled. :ref:`data-types-container` and :ref:`data-types-geo-shape` do not support storing values in a `Column Store`_ at all. .. _Column Store: https://en.wikipedia.org/wiki/Column-oriented_DBMS .. _ddl-replication: =========== Replication =========== You can configure CrateDB to *replicate* tables. When you configure replication, CrateDB will try to ensure that every table :ref:`shard ` has one or more copies available at all times. When there are multiple copies of the same shard, CrateDB will mark one as the *primary shard* and treat the rest as *replica shards*. Write operations always go to the primary shard, whereas read operations can go to any shard. CrateDB continually synchronizes data from the primary shard to all replica shards (through a process known as :ref:`shard recovery `). When a primary shard is lost (e.g., due to node failure), CrateDB will promote a replica shard to a primary. Hence, more table replicas mean a smaller chance of permanent data loss (through increased `data redundancy`_) in exchange for more disk space utilization and intra-cluster network traffic. Replication can also improve read performance because any increase in the number of shards distributed across a cluster also increases the opportunities for CrateDB to `parallelize`_ query execution across multiple nodes. .. rubric:: Table of contents .. contents:: :local: .. _ddl-replication-config: Table configuration =================== You can configure the number of per-shard replicas :ref:`WITH ` the :ref:`sql-create-table-number-of-replicas` table setting. For example:: cr> CREATE TABLE my_table ( ...
first_column integer, ... second_column text ... ) WITH (number_of_replicas = 0); CREATE OK, 1 row affected (... sec) As well as being able to configure a fixed number of replicas, you can configure a range of values by using a string to specify a minimum and a maximum (dependent on the number of nodes in the cluster). Here are some examples of replica ranges: ========= ===================================================================== Range Explanation ========= ===================================================================== ``0-1`` If you only have one node, CrateDB will not create any replicas. If you have more than one node, CrateDB will create one replica per shard. This range is the default value. --------- --------------------------------------------------------------------- ``2-4`` Each table will require at least two replicas for CrateDB to consider it fully replicated (i.e., a *green* replication :ref:`health status `). If the cluster has five nodes, CrateDB will create four replicas and allocate each one to a node that does not hold the corresponding primary. Suppose a cluster has four nodes or fewer. In that case, CrateDB will be unable to allocate every replica to a node that does not hold the corresponding primary, putting the table into :ref:`underreplication `. As a result, CrateDB will give the table a *yellow* replication :ref:`health status `. --------- --------------------------------------------------------------------- ``0-all`` CrateDB will create one replica shard for every node that is available in addition to the node that holds the primary shard. ========= ===================================================================== If you do not specify a ``number_of_replicas``, CrateDB will create one or zero replicas, depending on the number of available nodes at the cluster (e.g., on a single-node cluster, ``number_of_replicas`` will be set to zero to allow fast write operations with the default setting of :ref:`sql-create-table-write-wait-for-active-shards`). You can change the :ref:`sql-create-table-number-of-replicas` setting at any time. .. SEEALSO:: :ref:`CREATE TABLE: WITH clause ` .. _ddl-replication-recovery: Shard recovery ============== CrateDB :ref:`allocates ` each primary and replica shard to a specific node. You can control this behavior by configuring the :ref:`allocation ` settings. If one or more nodes become unavailable (e.g., due to hardware failure or network issues), CrateDB will try to recover a replicated table by doing the following: .. rst-class:: open - For every lost primary shard, locate a replica and promote it to a primary. When CrateDB promotes a replica to primary, it can no longer function as a replica, and so the total number of replicas decreases by one. Because each primary requires a fixed :ref:`sql-create-table-number-of-replicas`, a new replica has to be created (see next item). - For every primary with too few replicas (due to node loss or replica promotion), use the primary shard to :ref:`recover ` the required number of replicas. Shard recovery is one of the features that allows CrateDB to provide continuous `availability`_ and `partition tolerance`_ in exchange for some :ref:`consistency trade-offs `. .. SEEALSO:: `Wikipedia: CAP theorem`_ .. _ddl-replication-underreplication: Underreplication ================ Having more replicas per primary and distributing shards as thinly as possible (i.e., fewer shards per node) can both increase chances of a :ref:`successful recovery ` in the event of node loss. 
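To see at a glance whether any tables or partitions are currently underreplicated, you can query the ``sys.health`` table directly; a minimal sketch (the selected columns follow the ``sys.health`` schema):

.. code:: sql

    -- A YELLOW health indicates underreplicated shards,
    -- RED indicates missing (unassigned primary) shards.
    SELECT table_name, health, underreplicated_shards, missing_shards
    FROM sys.health
    ORDER BY severity DESC;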
A single node can hold multiple shards belonging to the same table. For example, suppose a table has more shards (primaries and replicas) than nodes available in the cluster. In that case, CrateDB will determine the best way to allocate shards to the nodes available. However, there is never a benefit to allocating multiple copies of the same shard to a single node (e.g., the primary and a replica of the same shard or two replicas of the same shard). For example: .. rst-class:: open - Suppose a single node held the primary and a replica of the same shard. If that node were lost, CrateDB would be unable to use either copy of the shard for :ref:`recovery ` (because both were lost), effectively making the replica useless. - Suppose a single node held two replicas of the same shard. If the primary shard were lost (on a different node), CrateDB would only need one of the replica shards on this node to promote a new primary, effectively making the second replica useless. In both cases, the second copy of the shard serves no purpose. For this reason, CrateDB will never allocate multiple copies of the same shard to a single node. The above rule means that for *one* primary shard and *n* replicas, a cluster must have at least *n + 1* available nodes for CrateDB to fully replicate all shards. When CrateDB cannot fully replicate all shards, the table enters a state known as *underreplication*. CrateDB gives underreplicated tables a *yellow* :ref:`health status `. .. TIP:: The `CrateDB Admin UI`_ provides visual indicators of cluster health that take replication status into account. Alternatively, you can query health information directly from the :ref:`sys.health ` table and replication information from the :ref:`sys.shards ` and :ref:`sys.allocations ` tables. .. _availability: https://en.wikipedia.org/wiki/Availability .. _CrateDB Admin UI: https://crate.io/docs/clients/admin-ui/en/latest/ .. _data redundancy: https://en.wikipedia.org/wiki/Data_redundancy .. _parallelize: https://en.wikipedia.org/wiki/Distributed_computing .. _partition tolerance: https://en.wikipedia.org/wiki/Network_partitioning .. _Wikipedia\: CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem.. _ddl-views: ===== Views ===== .. rubric:: Table of contents .. contents:: :local: .. _views-create: Creating views ============== Views are stored named queries which can be used in place of table names. They're resolved at runtime and can be used to simplify common queries. Views are created using the :ref:`CREATE VIEW statement ` For example, a common use case is to create a view which queries a table with a pre-defined filter:: cr> CREATE VIEW big_mountains AS ... SELECT * FROM sys.summits WHERE height > 2000; CREATE OK, 1 row affected (... sec) .. _views-query: Querying views ============== Once created, views can be used instead of a table in a statement:: cr> SELECT mountain, height FROM big_mountains ORDER BY 1 LIMIT 3; +--------------+--------+ | mountain | height | +--------------+--------+ | Acherkogel | 3008 | | Ackerlspitze | 2329 | | Adamello | 3539 | +--------------+--------+ SELECT 3 rows in set (... sec) .. _views-privileges: Privileges ---------- In order to be able to query data from a view, a user needs to have ``DQL`` privileges on a view. DQL privileges can be granted on a cluster level, on the schema in which the view is contained, or the view itself. Privileges on relations accessed by the view are not necessary. 
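For example, a user can be given read access to the view alone; a small sketch (assuming a user named ``john`` already exists):

.. code:: sql

    -- Allows john to SELECT from the view, without granting access
    -- to the underlying sys.summits table itself.
    GRANT DQL ON VIEW doc.big_mountains TO john;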
However, it is required, at all times, that the *owner* (the user who created the view) has ``DQL`` privileges on all relations occurring within the view's query definition. A common use case for this is to give users access to a subset of a table without exposing the table itself as well. If the owner loses ``DQL`` privileges on the underlying relations, a user who has access to the view will no longer be able to query it. .. SEEALSO:: :ref:`Administration: Privileges ` .. _views-drop: Dropping views ============== Views can be dropped using the :ref:`DROP VIEW statement `:: cr> DROP VIEW big_mountains; DROP OK, 1 row affected (... sec) .. _autogenerated_sequences_performance: ########################################################### Autogenerated sequences and PRIMARY KEY values in CrateDB ########################################################### As you begin working with CrateDB, you might be puzzled why CrateDB does not have a built-in, auto-incrementing "serial" data type like PostgreSQL or MySQL do. As a distributed database designed to scale horizontally, CrateDB needs as many operations as possible to complete independently on each node without any coordination between nodes. Maintaining a global auto-increment value requires that a node checks with other nodes before allocating a new value. This bottleneck would hinder our ability to achieve `extremely fast ingestion speeds`_. That said, there are many alternatives available, and we can also implement truly consistent/synchronized sequences if we want to. ************************************ Using a timestamp as a primary key ************************************ This option involves declaring a column as follows: .. code:: psql BIGINT DEFAULT now() PRIMARY KEY :Pros: Always-increasing number; ideal if we need to timestamp record creation anyway :Cons: Gaps between the numbers; not suitable if we may have more than one record in the same millisecond ************* Using UUIDs ************* This option involves declaring a column as follows: .. code:: psql TEXT DEFAULT gen_random_text_uuid() PRIMARY KEY :Pros: Globally unique; no risk of conflicts when merging data from different tables/environments :Cons: No order guarantee. Not as human-friendly as numbers. The string format may not be applicable to cover all scenarios. Range queries are not possible. ************************ Use UUIDv7 identifiers ************************ `Version 7 UUIDs`_ are a relatively new kind of UUID which features a time-ordered value. We can use these in CrateDB with a UDF_ using the code from `UUIDv7 in N languages`_. :Pros: Same as `gen_random_text_uuid` above, but almost sequential, which enables range queries. :Cons: Not as human-friendly as numbers, and a slight performance impact from UDF use ********************************* Use IDs from an external system ********************************* In cases where data is imported into CrateDB from external systems that employ identifier governance, CrateDB does not need to generate any identifier values, and primary key values can be inserted as-is from the source system. See `Replicating data from other databases to CrateDB with Debezium and Kafka`_ for an example. ********************* Implement sequences ********************* This approach involves a table to keep the latest values that have been consumed, and client-side code to keep it up-to-date in a way that guarantees unique values even when many ingestion processes run in parallel.
:Pros: Can have any arbitrary type of sequences, (we may for instance want to increment values by 10 instead of 1 - prefix values with a year number - combine numbers and letters - etc) :Cons: Need logic for the optimistic update implemented client-side, the sequences table becomes a bottleneck so not suitable for high-velocity ingestion scenarios We will first create a table to keep the latest values for our sequences: .. code:: psql CREATE TABLE sequences ( name TEXT PRIMARY KEY, last_value BIGINT ) CLUSTERED INTO 1 SHARDS; We will then initialize it with one new sequence at 0: .. code:: psql INSERT INTO sequences (name,last_value) VALUES ('mysequence',0); And we are going to do an example with a new table defined as follows: .. code:: psql CREATE TABLE mytable ( id BIGINT PRIMARY KEY, field1 TEXT ); The Python code below reads the last value used from the sequences table, and then attempts an `optimistic UPDATE`_ with a ``RETURNING`` clause, if a contending process already consumed the identity nothing will be returned so our process will retry until a value is returned, then it uses that value as the new ID for the record we are inserting into the ``mytable`` table. .. code:: python # /// script # requires-python = ">=3.8" # dependencies = [ # "records", # "sqlalchemy-cratedb", # ] # /// import time import records db = records.Database("crate://") sequence_name = "mysequence" max_retries = 5 base_delay = 0.1 # 100 milliseconds for attempt in range(max_retries): select_query = """ SELECT last_value, _seq_no, _primary_term FROM sequences WHERE name = :sequence_name; """ row = db.query(select_query, sequence_name=sequence_name).first() new_value = row.last_value + 1 update_query = """ UPDATE sequences SET last_value = :new_value WHERE name = :sequence_name AND _seq_no = :seq_no AND _primary_term = :primary_term RETURNING last_value; """ if ( str( db.query( update_query, new_value=new_value, sequence_name=sequence_name, seq_no=row._seq_no, primary_term=row._primary_term, ).all() ) != "[]" ): break delay = base_delay * (2**attempt) print(f"Attempt {attempt + 1} failed. Retrying in {delay:.1f} seconds...") time.sleep(delay) else: raise Exception(f"Failed after {max_retries} retries with exponential backoff") insert_query = "INSERT INTO mytable (id, field1) VALUES (:id, :field1)" db.query(insert_query, id=new_value, field1="abc") db.close() .. _extremely fast ingestion speeds: https://cratedb.com/blog/how-we-scaled-ingestion-to-one-million-rows-per-second .. _optimistic update: https://cratedb.com/docs/crate/reference/en/latest/general/occ.html#optimistic-update .. _replicating data from other databases to cratedb with debezium and kafka: https://cratedb.com/blog/replicating-data-from-other-databases-to-cratedb-with-debezium-and-kafka .. _udf: https://cratedb.com/docs/crate/reference/en/latest/general/user-defined-functions.html .. _uuidv7 in n languages: https://github.com/nalgeon/uuidv7/blob/main/src/uuidv7.cratedb .. _version 7 uuids: https://datatracker.ietf.org/doc/html/rfc9562#name-uuid-version-7.. highlight:: psql .. _sql_occ: ============================== Optimistic Concurrency Control ============================== .. rubric:: Table of contents .. contents:: :local: Introduction ============ Even though CrateDB does not support transactions, `Optimistic Concurrency Control`_ can be achieved by using the internal system columns :ref:`_seq_no ` and :ref:`_primary_term `. Every new primary shard row has an initial sequence number of ``0``. 
This value is increased by ``1`` on every insert, delete or update operation the primary shard executes. The primary term will be incremented when a shard is promoted to primary so the user can know if they are executing an update against the most up to date cluster configuration. .. Hidden: update some documents to raise their ``_seq_no`` values.:: cr> CREATE TABLE sensors ( ... id text primary key, ... type text, ... last_verification timestamp ... ); CREATE OK, 1 row affected (... sec) cr> INSERT INTO sensors (id, type, last_verification) VALUES ('ID1', 'DHT11', null); INSERT OK, 1 row affected (... sec) cr> INSERT INTO sensors (id, type, last_verification) VALUES ('ID2', 'DHT21', null); INSERT OK, 1 row affected (... sec) cr> refresh table sensors; REFRESH OK, 1 row affected (... sec) It's possible to fetch the ``_seq_no`` and ``_primary_term`` by selecting them:: cr> SELECT id, type, _seq_no, _primary_term FROM sensors ORDER BY 1; +-----+-------+---------+---------------+ | id | type | _seq_no | _primary_term | +-----+-------+---------+---------------+ | ID1 | DHT11 | 0 | 1 | | ID2 | DHT21 | 0 | 1 | +-----+-------+---------+---------------+ SELECT 2 rows in set (... sec) These ``_seq_no`` and ``_primary_term`` values can now be used on updates and deletes. .. NOTE:: Optimistic concurrency control only works using the ``=`` :ref:`operator `, checking for the exact ``_seq_no`` and ``_primary_term`` your update or delete is based on. Optimistic update ================= Querying for the correct ``_seq_no`` and ``_primary_term`` ensures that no concurrent update and cluster configuration change has taken place:: cr> UPDATE sensors SET last_verification = '2020-01-10 09:40' ... WHERE ... id = 'ID1' ... AND "_seq_no" = 0 ... AND "_primary_term" = 1; UPDATE OK, 1 row affected (... sec) Updating a row with a wrong or outdated sequence number or primary term will not execute the update and results in 0 affected rows:: cr> UPDATE sensors SET last_verification = '2020-01-10 09:40' ... WHERE ... id = 'ID1' ... AND "_seq_no" = 42 ... AND "_primary_term" = 5; UPDATE OK, 0 rows affected (... sec) Optimistic delete ================= The same can be done when deleting a row:: cr> DELETE FROM sensors WHERE id = 'ID2' ... AND "_seq_no" = 0 ... AND "_primary_term" = 1; DELETE OK, 1 row affected (... sec) Known limitations ================= - The ``_seq_no`` and ``_primary_term`` columns can only be used when specifying the whole primary key in a query. For example, the query below is not possible with the database schema used for testing, because ``type`` is not declared as a primary key:: cr> DELETE FROM sensors WHERE type = 'DHT11' ... AND "_seq_no" = 3 ... AND "_primary_term" = 1; UnsupportedFeatureException["_seq_no" and "_primary_term" columns can only be used together in the WHERE clause with equals comparisons and if there are also equals comparisons on primary key columns] - In order to use the optimistic concurrency control mechanism, both the ``_seq_no`` and ``_primary_term`` columns need to be specified. It is not possible to only specify one of them. For example, the query below will result in an error:: cr> DELETE FROM sensors WHERE id = 'ID1' AND "_seq_no" = 3; VersioningValidationException["_seq_no" and "_primary_term" columns can only be used together in the WHERE clause with equals comparisons and if there are also equals comparisons on primary key columns] - There is an exception to this behaviour, when the ``WHERE`` clause contains complex filtering and/or lots of Primary Key values. 
You can find more details :ref:`here `. .. NOTE:: Both ``DELETE`` and ``UPDATE`` commands will return a row count of ``0``, if the given required version does not match the actual version of the relevant row. .. _Optimistic Concurrency Control: https://en.wikipedia.org/wiki/Optimistic_concurrency_control.. _sharding_guide: .. _sharding-performance: ========================== Sharding Performance Guide ========================== This document is a sharding best practice guide for CrateDB. A brief recap: CrateDB tables are split into a configured number of shards, and then these shards are distributed across the cluster. Figuring out how many shards to use for your tables requires you to think about the type of data you're processing, the types of queries you're running, and the type of hardware you're using. .. NOTE:: This guide assumes you know the basics. If you are looking for an intro to sharding, see :ref:`sharding `. .. rubric:: Table of contents .. contents:: :local: Optimising for query performance ================================ .. _sharding-under-allocation: Under-allocation is bad ----------------------- .. CAUTION:: If you have fewer shards than CPUs in the cluster, this is called *under-allocation*, and it means you're not getting the best performance out of CrateDB. Whenever possible, CrateDB will parallelize query workloads and distribute them across the whole cluster. The more CPUs this query workload can be distributed across, the faster the query will run. To increase the chances that a query can be parallelized and distributed maximally, there should be at least as many shards for a table than there are CPUs in the cluster. This is because CrateDB will automatically balance shards across the cluster so that each node contains as few shards as possible. In summary: the smaller your shards are, the more of them you will have, and so the more likely it is that they will be distributed across the whole cluster, and hence across all of your CPUs, and hence the faster your queries will run. Significant over-allocation is bad ---------------------------------- .. CAUTION:: If you have more shards per table than CPUs, this is called *over-allocation*. A little over-allocation is desirable. But if you significantly over-allocate your shards per table, you will see performance degradation. When you have slightly more shards per table than CPUs, you ensure that query workloads can be parallelized and distributed maximally, which in turn ensures maximal query performance. However, if most nodes have more shards per table than they have CPUs, you could actually see performance degradation. Each shard comes with a cost in terms of open files, RAM, and CPU cycles. Smaller shards also means small shard indexes, which can adversely affect computed search term relevance. For performance reasons, one thousand shards per table per node is considered the highest recommended configuration. If you exceed this you will experience a failing cluster check. Balancing allocation -------------------- Finding the right balance when it comes to sharding will vary on a lot of things. And while it's generally advisable to slightly over-allocate, it's also a good idea to benchmark your particular setup so as to find the sweet spot. If you don't manually set the number of shards per table, CrateDB will make a best guess, based on the assumption that your nodes have two CPUs each. .. 
TIP:: For the purposes of calculating how many shards a table should be clustered into, you can typically ignore replica partitions as these are not usually queried across for reads. .. CAUTION:: If you are using :ref:`partitioned tables `, note that each partition is clustered into as many shards as you configure for the table. For example, a table with four shards and two partitions will have eight shards that can be commonly queried across. But a query that only touches one partition will only query across four shards. How this factors into balancing your shard allocation will depend on the types of queries you intend to run. .. _sharding_ingestion: Optimising for ingestion performance ==================================== As with `Optimising for query performance`_, when doing heavy ingestion, it is good to cluster a table across as many nodes as possible. However, `we have found`_ that ingestion throughput can often increase as the table shard per CPU ratio on each node *decreases*. Ingestion throughput typically varies on: data volume, individual payload sizes, batch insert size, and the hardware. In particular: using solid-state drives (SSDs) instead of hard-disk drives (HDDs) can massively increase ingestion throughput. It's a good idea to benchmark your particular setup so as to find the sweet spot. .. _we have found: https://cratedb.com/blog/big-cluster-insights-ingesting.. _performance-optimization: ######################## Query Optimization 101 ######################## This article covers some essential principles for optimizing queries in CrateDB while avoiding the most common pitfalls. The patterns are relevant to both the troubleshooting of slow queries and the proactive tuning of CrateDB deployments, and they show how small adjustments to filters, data transformations, and schemas can yield dramatic improvements in execution speed and resource utilization. .. _group-early-filtering: ************************************ Early Filtering and Data Reduction ************************************ This section focuses on minimizing data processed early in queries to reduce overhead. .. _filtering-early: Do all filtering as soon as possible ==================================== Sometimes it may be tempting to define some VIEWs, some CTEs, do some JOINs, and only filter results at the end, but in this context the optimizer may lose track of how the fields we are filtering on relate to the indexes on the actual tables. Whenever there is an opportunity to filter data immediately next to the ``FROM`` clause, try to narrow down results as early as possible. See `using common table expressions to speed up queries`_ for an example. .. _select-star: Avoid ``SELECT *`` ================== CrateDB is a columnar database. The fewer columns you specify in a ``SELECT`` clause, the less data CrateDB needs to read from disk. .. code:: sql -- Avoid selecting all columns SELECT * FROM customers; -- Instead, select explicitly the subset of columns you need SELECT customerid, country FROM customers; .. _minimise-result-sets: Avoid large result sets ======================= Be aware of the number of rows you are returning in a ``SELECT`` query. Analytical databases, such as CrateDB, excel at processing large data sets and returning small to medium-sized result sets. Serializing, transporting them over the network, and deserializing large result sets is expensive. 
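For example, instead of pulling an entire table into the client in one go, rows can be fetched in batches with a cursor; a rough sketch (the ``device_data`` table is hypothetical, and the exact cursor options are described in the ``DECLARE`` documentation):

.. code:: sql

    -- Declare a cursor over the full result set; WITH HOLD keeps it usable
    -- outside of an explicit transaction.
    DECLARE recent_readings NO SCROLL CURSOR WITH HOLD FOR
        SELECT reading_time, reading_value FROM device_data ORDER BY reading_time DESC;

    -- Fetch the next batch of rows; repeat until no more rows are returned.
    FETCH 1000 FROM recent_readings;

    -- Release the cursor when done.
    CLOSE recent_readings;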
When dealing with large result sets in the range of several hundred thousand records, consider whether your application needs the whole result set at once. Use `cursors`_ or ``LIMIT``/``OFFSET`` to fetch data in batches. See also `Fetching large result sets from CrateDB`_ for examples. .. _propagate-limit: Propagate LIMIT clauses when applicable ======================================= Similarly to the above, we may have for instance a ``LIMIT 10`` at the end of the query and to get there it may have been sufficient to only pull 10 records (or some other number of records) at an earlier stage from some given table. If that is the case duplicate or move (depending on the specific query) the ``LIMIT`` clause to the relevant place. In some cases, we may not know how many rows we need in the intermediate working sets but we know that there will be 10 records on the last day. Doing filtering early will help the optimizer and can protect the database from accidentally processing years of data. By not filtering early, the load on your cluster will increase tremendously. So for instance instead of: .. code:: sql SELECT factory_metadata.factory_name, device_data.device_name, device_data.reading_value FROM device_data INNER JOIN factory_metadata ON device_data.factory_id = factory_metadata.factory_id WHERE reading_time BETWEEN '2024-01-01' AND '2025-01-01' LIMIT 10; do: .. code:: sql WITH filtered_device_data AS ( SELECT device_data.factory_id, device_data.device_name, device_data.reading_value FROM device_data WHERE /* We are sure one month of data is sufficient to find 10 results and it may help with partition pruning */ reading_time BETWEEN '2024-12-01' AND '2025-01-01' LIMIT 10 ) SELECT factory_metadata.factory_name, filtered_device_data.device_name, filtered_device_data.reading_value FROM filtered_device_data INNER JOIN factory_metadata ON filtered_device_data.factory_id = factory_metadata.factory_id; .. _filter-with-array-expressions: Use filters with array expressions when filtering on the output of UNNEST ========================================================================= On denormalized data sets, you may observe records including columns storing arrays of objects. You may want to unnest the array in a subquery or CTE and later filter on a property of the OBJECTs. The next statement (in versions of CrateDB < 6.0.0) will result in every row in the table (not filtered with other conditions) being read and unnested, to check if it meets the criteria on ``field1``. .. code:: sql SELECT * FROM ( SELECT UNNEST(my_array_of_objects) obj FROM my_table ) WHERE obj['field1'] = 1; However, CrateDB can do a lot better than this if we add an additional condition like this: .. code:: sql SELECT * FROM ( SELECT UNNEST(my_array_of_objects) obj FROM my_table WHERE 1 = ANY(my_array_of_objects['field1']) ) AS subquery WHERE obj['field1'] = 1; CrateDB leverages indexes to only unnest the relevant records from ``my_table`` which can make a huge difference. .. _group-efficient-query-structure: ****************************************** Efficient Query Structure and Constructs ****************************************** This section focuses on optimizing SQL logic by prioritizing efficient syntax and avoiding redundant operations. .. _only-sort-when-needed: Only sort data when needed ========================== Indexing in CrateDB is optimized to support filtering and aggregations without requiring expensive defragmentation operations, but it is not optimized for sorting​. 
Maintaining a sorted index would slow down ingestion, which is why other analytical database systems like Cassandra and Redshift make similar trade-offs. This means that when an ``ORDER BY`` operation is requested, the whole dataset needs to be loaded into main memory on the relevant cluster node to be sorted. For this reason, it is important not to request ``ORDER BY`` operations when they are not actually needed, and most importantly, not on tables of large cardinality without aggregating records beforehand. On the other hand, it is of course no problem to sort a few thousand rows in the final stage of a ``SELECT`` operation, but we need to avoid requesting sort operations over millions of rows. Consider leveraging filters and aggregations like ``max_by`` and ``min_by`` to limit the scope of ``ORDER BY`` operations, or avoid them altogether when possible. So for instance instead of: .. code:: sql SELECT reading_time, reading_value FROM device_data WHERE reading_time BETWEEN '2024-01-01' AND '2025-01-01' ORDER BY reading_time DESC LIMIT 10; use: .. code:: sql SELECT reading_time, reading_value FROM device_data WHERE reading_time BETWEEN '2024-12-20' AND '2025-01-01' ORDER BY reading_time DESC LIMIT 10; .. _format-as-last-step: Format output as a last step ============================ In many cases, data may be stored in an efficient format, but we want to transform it to make it more human-readable in the output of the query. To accommodate such situations, we may use `scalar functions`_ such as ``date_format`` or ``timezone``. Sometimes queries apply these transformations in an intermediate step and later do further operations like filtering on the transformed values. CrateDB's query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans, and it always aims to use existing indexes on the original data for maximum efficiency. However, there is always a chance that some particular clause in the query expression prevents the optimizer from selecting an optimal plan, so that the transformation ends up being applied to thousands or millions of records that would later be discarded anyway. So, whenever it makes sense, we want to ensure these transformations are only applied after the database has already worked out the final result set to be sent back to the client. So instead of: .. code:: sql WITH mydata AS ( SELECT DATE_FORMAT(device_data.reading_time) AS formatted_reading_time, device_data.reading_value FROM device_data ) SELECT * FROM mydata WHERE formatted_reading_time LIKE '2025%'; use: .. code:: sql SELECT DATE_FORMAT(device_data.reading_time) AS formatted_reading_time, device_data.reading_value FROM device_data WHERE device_data.reading_time BETWEEN '2025-01-01' AND '2026-01-01' .. _replace-case: Replace CASE in expressions used for filtering, JOINs, grouping, etc ==================================================================== It is not always obvious to the optimizer what we may be trying to do with a ``CASE`` expression (see for instance `Shortcut CASE evaluation Issue 16022`_). If you are using a ``CASE`` expression for “formatting”, see the previous point about formatting output as late as possible; but if you are using a ``CASE`` expression as part of a filter or other operation, consider replacing it with an equivalent expression, for instance: .. code:: sql SELECT SUM(a) as count_greater_than_10,... FROM ( SELECT CASE WHEN field1 > 10 THEN 1 ELSE 0 END AS a, ... FROM mytable ...
) subquery ...; can be rewritten as .. code:: sql SELECT COUNT(field1) FILTER (WHERE field1 > 10) as count_greater_than_10 FROM mytable; And .. code:: postgresql SELECT * FROM mytable WHERE CASE WHEN $1 = 'ALL COUNTRIES' THEN true WHEN $1 = mytable.country AND $2 = 'ALL CITIES' THEN true ELSE $1 = mytable.country AND $2 = mytable.city END; can be rewritten as .. code:: postgresql SELECT * FROM mytable WHERE ($1 = 'ALL COUNTRIES') OR ($1 = mytable.country AND $2 = 'ALL CITIES') OR ($1 = mytable.country AND $2 = mytable.city) (the exact replacement expressions of course depend on the semantics of each case) .. _groups-instead-distinct: Use groupings instead of DISTINCT ================================= (Reference: `Issue 13818`_) Instead of .. code:: sql SELECT DISTINCT country FROM customers; use .. code:: sql SELECT country FROM customers GROUP BY country; and instead of .. code:: sql SELECT COUNT(DISTINCT a) FROM t; use .. code:: sql SELECT COUNT(a) FROM ( SELECT a FROM t GROUP BY a ) tmp; .. _subqueries-instead-groups: Use subqueries instead of GROUP BY if the groups are already known ================================================================== Consider the following query: .. code:: sql SELECT customerid, SUM(order_amount) AS total FROM customer_orders GROUP BY customerid; This looks simple but to execute it CrateDB needs to keep the full result set in memory for all groups. If we already know what the groups will be we can use correlated subqueries instead: .. code:: sql SELECT customerid, (SELECT SUM(order_amount) FROM customer_orders WHERE customer_orders.customerid = customers.customerid ) AS total FROM customers; .. _group-large-and-complex-queries: ************************************ Handling Large and Complex Queries ************************************ This section discusses strategies for breaking down complex operations on large datasets into manageable steps. .. _batch-operations: Batch operations ================ If you need to perform lots of UPDATEs or expensive INSERTs from SELECT, consider exploring different settings for the `overload protection`_ or `thread pool sizing`_ which can be used to fine tune the performance for these operations. Otherwise, if you only need to run it once and performance is not critical, consider using small batches instead, where the operations are done on groups of records each time. So for instance instead of doing: .. code:: sql UPDATE mytable SET field1 = field1 + 1; consider a different approach such as: .. code:: shell for id in {1..100}; do crash -c "UPDATE mytable SET field1 = field1 + 1 WHERE customer_id = $id;" done .. _pagination-filters: Paginate on filters instead of results ====================================== For instance instead of .. code:: sql SELECT deviceid, AVG(field1) FROM device_data GROUP BY deviceid LIMIT 1000 OFFSET 5000; We can do something like .. code:: sql WITH devices AS ( SELECT deviceid FROM devices LIMIT 5 OFFSET 25 ) SELECT deviceid, AVG(field1) FROM device_data WHERE device_data.deviceid IN (SELECT devices.deviceid FROM devices) GROUP BY deviceid; .. _staging-tables: Use staging tables for intermediate results if you are doing a lot of JOINs =========================================================================== If you have many CTEs or VIEWs with a need to JOIN them, it can be benefical to query them individually, store intermediate results into dedicated tables, and then use these tables for JOINing. 
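As an illustration, the intermediate result can be materialized once and then joined; a sketch (the table and column names follow the hypothetical ``device_data``/``factory_metadata`` examples used earlier):

.. code:: sql

    -- Materialize the expensive intermediate result into a staging table.
    CREATE TABLE staging_factory_daily AS (
        SELECT factory_id,
               date_trunc('day', reading_time) AS reading_day,
               avg(reading_value) AS avg_reading
        FROM device_data
        GROUP BY factory_id, date_trunc('day', reading_time)
    );

    -- JOIN against the much smaller staging table instead of the raw data.
    SELECT f.factory_name, s.reading_day, s.avg_reading
    FROM staging_factory_daily s
    JOIN factory_metadata f ON f.factory_id = s.factory_id;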
While there is a cost in writing to disk and reading data back, the whole operation can benefit from indexing and from giving the optimizer more straightforward execution plans, enabling it to optimize for better parallel execution across multiple cluster nodes. .. _group-schema-and-function-optimization: ********************************** Schema and Function Optimization ********************************** This section focuses on schema design and function usage to streamline performance. .. _consider-generated-columns: Consider generated columns ========================== If you frequently find yourself extracting information from fields and then using this extracted data in filters or aggregations, consider doing this operation at ingestion time with a `generated column`_. In this way, the value we need for filtering and aggregations can be indexed. This involves a trade-off between storage space and query performance: weigh the frequency and execution times of these queries against the additional storage required for the generated value. See `Using regex comparisons and other features for inspection of logs`_ for an example. .. _udf-right-context: Be mindful of UDFs, leverage them in the right contexts, but only in the right contexts ======================================================================================= When using user-defined functions (UDFs), two important details relevant to performance need to be considered. #. Once values are processed by a UDF, the database engine will load the results into memory and will no longer be able to leverage indexes on the underlying fields. In this spirit, please apply the relevant general considerations about delaying formatting as much as possible. #. UDFs run on a JavaScript virtual machine on a single thread, so they can have an impact on performance as the relevant operations cannot be parallelized. However, some operations may be more straightforward to do in JavaScript than in SQL. .. _group-filter-expression-optimizations: ************************************ Filter and Expression Optimization ************************************ This section discusses expressions that improve filter efficiency and the handling of specific data structures. .. _positive-filters: Avoid expression negation in filters ==================================== Positive filter expressions can directly leverage indexing. With negative expressions, the optimizer may still be able to use indexes, but this may not always happen and the optimizer might not rewrite the query optimally. Explicitly using positive conditions removes ambiguity and ensures the most efficient path is chosen. So instead of: .. code:: sql SELECT customerid, status FROM customers_table WHERE NOT (customerid <= 2) AND NOT (status = 'inactive'); We can rewrite this as: .. code:: sql SELECT customerid, status FROM customers_table WHERE customerid > 2 AND status = 'active'; .. _use-null-or-empty: Use the special null_or_empty function with OBJECTs and ARRAYs when relevant ============================================================================ CrateDB has a special scalar function called null_or_empty_. Using it in filter conditions against OBJECTs and ARRAYs is much faster than using an ``IS NULL`` clause, provided that allowing empty objects and arrays is acceptable. So instead of: .. code:: sql SELECT ... FROM mytable WHERE array_column IS NULL OR array_column = []; We can rewrite this as: .. code:: sql SELECT ... FROM mytable WHERE null_or_empty(array_column); ..
.. _explain analyze: https://cratedb.com/docs/crate/reference/en/latest/sql/statements/explain.html
.. _fetching large result sets from cratedb: https://community.cratedb.com/t/fetching-large-result-sets-from-cratedb/1270
.. _overload protection: https://cratedb.com/docs/crate/reference/en/latest/config/cluster.html#overload-protection
.. _thread pool sizing: https://cratedb.com/docs/crate/reference/en/latest/config/cluster.html#thread-pools
.. _generated column: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/generated-columns.html
.. _issue 13818: https://github.com/crate/crate/issues/13818
.. _null_or_empty: https://cratedb.com/docs/crate/reference/en/latest/general/builtins/scalar-functions.html#null-or-empty-object
.. _scalar functions: https://cratedb.com/docs/crate/reference/en/latest/general/builtins/scalar-functions.html
.. _shortcut case evaluation issue 16022: https://github.com/crate/crate/issues/16022
.. _using common table expressions to speed up queries: https://community.cratedb.com/t/using-common-table-expressions-to-speed-up-queries/1719
.. _using regex comparisons and other features for inspection of logs: https://community.cratedb.com/t/using-regex-comparisons-and-other-advanced-database-features-for-real-time-inspection-of-web-server-logs/1564
.. _cursors: https://cratedb.com/docs/crate/reference/en/latest/sql/statements/declare.html

.. _performance-scaling:

##################
Design for scale
##################

This article explores critical design considerations to successfully scale CrateDB in large production environments, ensuring performance and reliability as workloads grow.

.. _mindful-of-memory:

*******************************
Be mindful of memory capacity
*******************************

In CrateDB, operations that require a working set, such as groupings, aggregations, and sorting, are performed fully in memory and do not spill over to disk. Sometimes you may have a query that leads to a sub-optimal execution plan requiring lots of memory. If you are coming to CrateDB from other database systems, you may expect such queries to keep running, taking longer than necessary and impacting other workloads in the meantime. Sometimes this effect is obvious, when a query takes a lot of resources and runs for a long time; other times it goes unnoticed, for example when a query that could complete in, say, 100 milliseconds takes one hundred times longer, 10 seconds, and users put up with it without reporting it.

If a query would require more heap memory than the involved nodes have available, it fails with a particular type of error message that we call a ``CircuitBreakerException``. This is a fail-fast approach: we quickly see that there is an issue and can optimize the query to get the best performance, without impacting other workloads. Please take a look at :ref:`Query Optimization 101 ` for strategies to optimize your queries when you encounter this situation.
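To keep an eye on heap headroom before such failures occur, something along the following lines can help (a minimal sketch; it assumes the ``heap`` columns exposed by the ``sys.nodes`` table, as described in the reference manual):

.. code:: sql

   -- Approximate heap usage per node; values close to 1.0 indicate that
   -- memory-hungry queries are likely to trip the circuit breaker.
   SELECT name,
          heap['used']::DOUBLE PRECISION / heap['max'] AS heap_used_ratio
   FROM sys.nodes
   ORDER BY heap_used_ratio DESC;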
.. _reading-lots-of-records:

*************************
Reading lots of records
*************************

When the HTTP endpoint is used, CrateDB prepares the entire response in memory before sending it to the client. When the PostgreSQL protocol is used, CrateDB attempts to stream the results, but in many cases it still needs to bring all rows to the query handler node first. So we should always limit how many rows we request at a time; see `Fetching large result sets from CrateDB`_.

.. _number-of-shards:

******************
Number of shards
******************

In CrateDB, the data in tables and partitions is distributed across storage units that we call shards. If we do not specify how many shards we want for a table or partition, CrateDB derives a default from the number of nodes. CrateDB also keeps replicas of the data, which results in additional shards in the cluster.

Having too many or too few shards has performance implications, so it is very important to get familiar with the :ref:`Sharding Performance Guide `. In particular, there is a soft limit of 1,000 shards per node, so table schemas, partitioning strategy, and the number of nodes need to be planned to stay well below this limit. One strategy is to aim for a configuration where, even if one node in the cluster is lost, the remaining nodes still hold fewer than 1,000 shards each.

If this was not taken into account when the tables were initially defined, consider the following:

- changing the partitioning strategy requires creating a new table and copying over the data
- the easiest way to change the number of shards on a partitioned table is to apply the change to new partitions only, using the ``ALTER TABLE ONLY`` command
- see also `Changing the number of shards`_

.. _amount-of-indexed-columns:

*************************************
Number of indexed fields in OBJECTs
*************************************

``OBJECT`` columns are ``DYNAMIC`` by default and CrateDB indexes all their fields, providing excellent query performance without requiring manual indexing. However, excessive indexing can impact storage, write speed, and resource utilization.

- All fields in OBJECTs are automatically indexed when inserted.
- CrateDB optimizes indexing using Lucene-based columnar storage.
- There is a soft limit of 1,000 total indexed columns and OBJECT fields per table.
- Going beyond this limit may impact performance.

In cases with many fields and columns, it is advisable to determine whether some OBJECTs, or nested parts of them, really need to be indexed, and to use the `ignored column policy`_ where applicable.

.. _section-joins:

*******
JOINs
*******

CrateDB is a lot better at JOINs than many of our competitors and is getting better with every release, but JOINs in distributed databases are tricky to optimize, so in many cases queries involving JOINs may need a bit of tweaking. See `Using common table expressions to speed up queries`_.

.. _changing the number of shards: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/alter-table.html#alter-shard-number
.. _fetching large result sets from cratedb: https://community.cratedb.com/t/fetching-large-result-sets-from-cratedb/1270
.. _ignored column policy: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/data-types.html#ignored
..
_using common table expressions to speed up queries: https://community.cratedb.com/t/using-common-table-expressions-to-speed-up-queries/1719(integrate)= # Integrations You have a variety of options to connect and integrate 3rd-party applications, mostly using [CrateDB's PostgreSQL interface]. This documentation section lists applications, frameworks, and libraries, which can be used together with CrateDB, and outlines how to use them optimally. ```{toctree} :maxdepth: 2 ide orm df etl/index cdc/index metrics/index visualize/index bi/index lineage/index testing ``` :::{tip} Please also visit the [Overview of CrateDB integration tutorials]. ::: [CrateDB's PostgreSQL interface]: inv:crate-reference#interface-postgresql [Overview of CrateDB integration tutorials]: https://community.cratedb.com/t/overview-of-cratedb-integration-tutorials/1015We've created many integration-focused tutorials to help you use CrateDB with other awesome tools and libraries.👇 All tutorials require the working installation of CrateDB. |**Tool** | **Articles/Tutorials** | C| |--- | --- | ---| |[Apache Airflow](https://airflow.apache.org/) / [Astronomer](https://www.astronomer.io/) | - https://community.cratedb.com/t/cratedb-and-apache-airflow-automating-data-export-to-s3/901
- https://community.cratedb.com/t/cratedb-and-apache-airflow-implementation-of-data-retention-policy/913
- https://community.cratedb.com/t/cratedb-and-apache-airflow-building-a-data-ingestion-pipeline/926
- https://community.cratedb.com/t/cratedb-and-apache-airflow-building-a-hot-cold-storage-data-retention-policy/934 | | |[Apache Arrow](https://arrow.apache.org) | https://community.cratedb.com/t/import-parquet-files-into-cratedb-using-apache-arrow-and-sqlalchemy/1161 | | |[Apache Kafka](https://kafka.apache.org/) | https://crate.io/docs/crate/howtos/en/latest/integrations/kafka-connect.html | | |[Apache NiFi](https://nifi.apache.org/) | https://community.cratedb.com/t/connecting-to-cratedb-from-apache-nifi/647 | | |[Apache Spark](https://spark.apache.org/) | - https://community.cratedb.com/t/getting-started-with-apache-spark-and-cratedb-a-step-by-step-tutorial/1595
- https://community.cratedb.com/t/introduction-to-azure-databricks-with-cratedb/764
- https://github.com/crate/cratedb-examples/tree/main/by-dataframe/spark/scala-http | | |[Apache Superset](https://github.com/apache/superset) / [Preset](https://preset.io/) | - https://community.cratedb.com/t/set-up-apache-superset-with-cratedb/1716
- https://crate.io/blog/use-cratedb-and-apache-superset-for-open-source-data-warehousing-and-visualization
- [Introduction to Time-Series Visualization in CrateDB and Superset](https://crate.io/blog/introduction-to-time-series-visualization-in-cratedb-and-superset) | | |[Balena](https://www.balena.io/) | https://community.cratedb.com/t/deploying-cratedb-on-balena-io/1067 | | |[Cluvio](https://www.cluvio.com/) | https://community.cratedb.com/t/data-analysis-with-cluvio-and-cratedb/1571 | | |[Dapr](https://dapr.io/) | https://community.cratedb.com/t/connecting-to-cratedb-from-dapr/660 | | |[DataGrip](https://www.jetbrains.com/datagrip/) | https://cratedb.com/docs/guide/integrate/datagrip/ | | |[Datashader](https://datashader.org/) | [CrateDB Time Series Exploration and Visualization](https://github.com/crate/cratedb-examples/tree/amo/cloud-datashader/topic/timeseries/explore) | | |[Dask](https://www.dask.org/) | https://community.cratedb.com/t/guide-to-efficient-data-ingestion-to-cratedb-with-pandas-and-dask/1482 | | |[DBeaver](https://dbeaver.io/about/) | https://crate.io/blog/cratedb-dbeaver | | |[dbt](https://github.com/dbt-labs/dbt-core) | https://community.cratedb.com/t/using-dbt-with-cratedb/1566 | | |[Debezium](https://debezium.io/) | https://community.cratedb.com/t/replicating-data-from-other-databases-to-cratedb-with-debezium-and-kafka/1388 | | |[Explo](https://www.explo.co/) | https://crate.io/blog/introduction-to-time-series-visualization-in-cratedb-and-explo | | |[JMeter](https://jmeter.apache.org) | https://community.cratedb.com/t/jmeter-jdbc-connection-to-cratedb/1051/2?u=jayeff | | |[Grafana](https://grafana.com/) | - https://crate.io/blog/visualizing-time-series-data-with-grafana-and-cratedb
- https://community.cratedb.com/t/monitoring-an-on-premises-cratedb-cluster-with-prometheus-and-grafana/1236 | | |[Kestra.io](https://kestra.io/) | https://community.cratedb.com/t/guide-to-cratedb-data-pipelines-with-kestra-io/1400 | | |[LangChain](https://www.langchain.com/) | https://community.cratedb.com/t/how-to-set-up-langchain-with-cratedb/1576 | | |[Locust](https://locust.io) | https://community.cratedb.com/t/loadtesting-cratedb-using-locust/1686 | | |[Meltano](https://meltano.com/) | [Meltano Examples](https://github.com/crate/cratedb-examples/tree/amo/meltano/framework/singer-meltano) | | |[Metabase](https://www.metabase.com/) | - https://community.cratedb.com/t/visualizing-data-with-metabase/1401
- https://community.cratedb.com/t/demo-of-metabase-and-cratedb-getting-started/1436 | | |[Node-RED](https://nodered.org/) | https://community.cratedb.com/t/ingesting-mqtt-messages-into-cratedb-using-node-red/803 | | |[pandas](https://pandas.pydata.org/) | - https://community.cratedb.com/t/from-data-storage-to-data-analysis-tutorial-on-cratedb-and-pandas-2/1440
- https://community.cratedb.com/t/automating-financial-data-collection-and-storage-in-cratedb-with-python-and-pandas/916
- https://community.cratedb.com/t/importing-parquet-files-into-cratedb-using-apache-arrow-and-sqlalchemy/1161
- https://community.cratedb.com/t/guide-to-efficient-data-ingestion-from-pandas-to-cratedb/1541 | | |[PowerBI](https://powerbi.microsoft.com/en-us/) | https://crate.io/docs/crate/howtos/en/latest/integrations/powerbi-desktop.html
https://crate.io/docs/crate/howtos/en/latest/integrations/powerbi-gateway.html | | |[Prefect](https://www.prefect.io/) | https://community.cratedb.com/t/building-seamless-data-pipelines-made-easy-combining-prefect-and-cratedb/1555 | | |[Prometheus](https://prometheus.io/) | - https://community.cratedb.com/t/cratedb-and-prometheus-for-long-term-metrics-storage/1012
- https://community.cratedb.com/t/monitoring-an-on-premises-cratedb-cluster-with-prometheus-and-grafana/1236 | | |[PyCaret](https://pycaret.org/) | [AutoML with PyCaret and CrateDB](https://github.com/crate/cratedb-examples/tree/main/topic/machine-learning/automl) | | |[R](https://www.r-project.org/) | https://crate.io/docs/crate/howtos/en/latest/integrations/r.html | | |[Rill](https://www.rilldata.com/) | https://community.cratedb.com/t/introducing-rill-and-bi-as-code-with-cratedb-cloud/1718 | | |[Rsyslog](https://www.rsyslog.com/) | https://community.cratedb.com/t/storing-server-logs-on-cratedb-for-fast-search-and-aggregations/1562 | | |[SQLPad](https://crate.io/blog/use-cratedb-with-sqlpad-as-a-self-hosted-query-tool-and-visualizer) | https://crate.io/blog/use-cratedb-with-sqlpad-as-a-self-hosted-query-tool-and-visualizer | | |[StreamSets](https://crate.io/docs/crate/howtos/en/latest/integrations/streamsets.html) | https://crate.io/docs/crate/howtos/en/latest/integrations/streamsets.html | | |[Tableau](https://www.tableau.com/) | https://community.cratedb.com/t/using-cratedb-with-tableau/1192 | | |[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) | https://crate.io/blog/use-cratedb-with-telegraf-an-agent-for-collecting-reporting-metrics | | |[TensorFlow](https://www.tensorflow.org/) | https://crate.io/docs/crate/howtos/en/latest/integrations/ml-dist.html | | |[Terraform](https://www.terraform.io/) | https://community.cratedb.com/t/deploying-cratedb-to-the-cloud-via-terraform/849 | | |[Trino](https://trino.io/) | https://community.cratedb.com/t/connecting-to-cratedb-using-trino/993 | |
.. highlight:: sh .. _interface-http: ============= HTTP endpoint ============= CrateDB provides a HTTP Endpoint that can be used to submit SQL queries. The endpoint is accessible under ``/_sql``. SQL statements are sent to the ``_sql`` endpoint in ``json`` format, whereby the statement is sent as value associated to the key ``stmt``. .. SEEALSO:: :ref:`dml` A simple ``SELECT`` statement can be submitted like this:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql' \ ... -d '{"stmt":"select name, position from locations order by id limit 2"}' { "cols": [ "name", "position" ], "rows": [ [ "North West Ripple", 1 ], [ "Outer Eastern Rim", 2 ] ], "rowcount": 2, "duration": ... } .. NOTE:: We're using a simple command line invocation of ``curl`` here so you can see how to run this by hand in the terminal. For the rest of the examples in this document, we use `here documents`_ (i.e. ``EOF``) for multi line readability. .. rubric:: Table of contents .. contents:: :local: .. _http-param-substitution: Parameter substitution ====================== In addition to the ``stmt`` key the request body may also contain an ``args`` key which can be used for SQL parameter substitution. The SQL statement has to be changed to use placeholders where the values should be inserted. Placeholders can either be numbered (in the form of ``$1``, ``$2``, etc.) or unnumbered using a question mark ``?``. The placeholders will then be substituted with values from an array that is expected under the ``args`` key:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql' -d@- <<- EOF ... { ... "stmt": ... "select date,position from locations ... where date <= \$1 and position < \$2 order by position", ... "args": ["1979-10-12", 3] ... } ... EOF { "cols": [ "date", "position" ], "rows": [ [ 308534400000, 1 ], [ 308534400000, 2 ] ], "rowcount": 2, "duration": ... } .. NOTE:: In this example the placeholders start with an backslash due to shell escaping. .. WARNING:: Parameter substitution must not be used within subscript notation. For example, ``column[?]`` is not allowed. The same query using question marks as placeholders looks like this:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql' -d@- <<- EOF ... { ... "stmt": ... "select date,position from locations ... where date <= ? and position < ? order by position", ... "args": ["1979-10-12", 3] ... } ... EOF { "cols": [ "date", "position" ], "rows": [ [ 308534400000, 1 ], [ 308534400000, 2 ] ], "rowcount": 2, "duration": ... } .. NOTE:: With some queries the row count is not ascertainable. In this cases ``rowcount`` is ``-1``. .. _http-default-schema: Default schema ============== It is possible to set a default schema while querying the CrateDB cluster via ``_sql`` end point. In such case the HTTP request should contain the ``Default-Schema`` header with the specified schema name:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql' \ ... -H 'Default-Schema: doc' -d@- <<- EOF ... { ... "stmt":"select name, position from locations order by id limit 2" ... } ... EOF { "cols": [ "name", "position" ], "rows": [ [ "North West Ripple", 1 ], [ "Outer Eastern Rim", 2 ] ], "rowcount": 2, "duration": ... } If the schema name is not specified in the header, the default ``doc`` schema will be used instead. .. _http-column-types: Column types ============ CrateDB can respond a list ``col_types`` with the data type ID of every responded column. 
This way one can know what exact data type a column is holding. In order to get the list of column data types, a ``types`` query parameter must be passed to the request:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql?types' -d@- <<- EOF ... { ... "stmt": ... "select date, position from locations ... where date <= \$1 and position < \$2 order by position", ... "args": ["1979-10-12", 3] ... } ... EOF { "cols": [ "date", "position" ], "col_types": [ 11, 9 ], "rows": [ [ 308534400000, 1 ], [ 308534400000, 2 ] ], "rowcount": 2, "duration": ... } The ``Array`` collection data type is displayed as a list where the first value is the collection type and the second is the inner type. The inner type could also be a collection. Example of JSON representation of a column list of (String, Integer[]):: "column_types": [ 4, [ 100, 9 ] ] .. _http-data-types-table: Available data types -------------------- IDs of all currently available data types: .. list-table:: :widths: 8 30 :header-rows: 1 * - ID - Data type * - 0 - :ref:`NULL ` * - 1 - Not supported * - 2 - :ref:`CHAR ` * - 3 - :ref:`BOOLEAN ` * - 4 - :ref:`TEXT ` * - 5 - :ref:`IP ` * - 6 - :ref:`DOUBLE PRECISION ` * - 7 - :ref:`REAL ` * - 8 - :ref:`SMALLINT ` * - 9 - :ref:`INTEGER ` * - 10 - :ref:`BIGINT ` * - 11 - :ref:`TIMESTAMP WITH TIME ZONE ` * - 12 - :ref:`OBJECT ` * - 13 - :ref:`GEO_POINT ` * - 14 - :ref:`GEO_SHAPE ` * - 15 - :ref:`TIMESTAMP WITHOUT TIME ZONE ` * - 16 - Unchecked object * - 17 - :ref:`INTERVAL ` * - 18 - :ref:`ROW ` * - 19 - :ref:`REGPROC ` * - 20 - :ref:`TIME ` * - 21 - :ref:`OIDVECTOR ` * - 22 - :ref:`NUMERIC ` * - 23 - :ref:`REGCLASS ` * - 24 - :ref:`DATE ` * - 25 - :ref:`BIT ` * - 26 - :ref:`JSON ` * - 27 - :ref:`CHARACTER ` * - 28 - :ref:`FLOAT VECTOR ` * - 100 - :ref:`ARRAY ` .. _http-error-handling: Error handling ============== Queries that are invalid or cannot be satisfied will result in an error response. The response will contain an error code, an error message and in some cases additional arguments that are specific to the error code. Client libraries should use the error code to translate the error into an appropriate exception:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql' -d@- <<- EOF ... { ... "stmt":"select name, position from foo.locations" ... } ... EOF { "error": { "message": "SchemaUnknownException[Schema 'foo' unknown]", "code": 4045 } } To get more insight into what exactly went wrong an additional ``error_trace`` ``GET`` parameter can be specified to return the stack trace:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql?error_trace=true' -d@- <<- EOF ... { ... "stmt":"select name, position from foo.locations" ... } ... EOF { "error": { "message": "SchemaUnknownException[Schema 'foo' unknown]", "code": 4045 }, "error_trace": "..." } .. NOTE:: This parameter is intended for CrateDB developers or for users requesting support for CrateDB. Client libraries shouldn't make use of this option and not include the stack trace. .. _http-error-codes: Error codes ----------- ====== ===================================================================== Code Error ====== ===================================================================== 4000 The statement contains an invalid syntax or unsupported SQL statement ------ --------------------------------------------------------------------- 4001 The statement contains an invalid analyzer definition. 
------ --------------------------------------------------------------------- 4002 The name of the relation is invalid. ------ --------------------------------------------------------------------- 4003 Field type validation failed ------ --------------------------------------------------------------------- 4004 Possible feature not supported (yet) ------ --------------------------------------------------------------------- 4005 Alter table using a table alias is not supported. ------ --------------------------------------------------------------------- 4006 The used column alias is ambiguous. ------ --------------------------------------------------------------------- 4007 The operation is not supported on this relation, as it is not accessible. ------ --------------------------------------------------------------------- 4008 The name of the column is invalid. ------ --------------------------------------------------------------------- 4009 CrateDB License is expired. (Deprecated.) ------ --------------------------------------------------------------------- 4010 User is not authorized to perform the SQL statement. ------ --------------------------------------------------------------------- 4011 Missing privilege for user. ------ --------------------------------------------------------------------- 4031 Only read operations are allowed on this node. ------ --------------------------------------------------------------------- 4041 Unknown relation. ------ --------------------------------------------------------------------- 4042 Unknown analyzer. ------ --------------------------------------------------------------------- 4043 Unknown column. ------ --------------------------------------------------------------------- 4044 Unknown type. ------ --------------------------------------------------------------------- 4045 Unknown schema. ------ --------------------------------------------------------------------- 4046 Unknown Partition. ------ --------------------------------------------------------------------- 4047 Unknown Repository. ------ --------------------------------------------------------------------- 4048 Unknown Snapshot. ------ --------------------------------------------------------------------- 4049 Unknown :ref:`user-defined function `. ------ --------------------------------------------------------------------- 40410 Unknown user. ------ --------------------------------------------------------------------- 40411 Document not found. ------ --------------------------------------------------------------------- 4091 A document with the same primary key exists already. ------ --------------------------------------------------------------------- 4092 A VersionConflict. Might be thrown if an attempt was made to update the same document concurrently. ------ --------------------------------------------------------------------- 4093 A relation with the same name exists already. ------ --------------------------------------------------------------------- 4094 The used table alias contains tables with different schema. ------ --------------------------------------------------------------------- 4095 A repository with the same name exists already. ------ --------------------------------------------------------------------- 4096 A snapshot with the same name already exists in the repository. ------ --------------------------------------------------------------------- 4097 A partition for the same values already exists in this table. 
------ --------------------------------------------------------------------- 4098 A user-defined function with the same signature already exists. ------ --------------------------------------------------------------------- 4099 A user with the same name already exists. ------ --------------------------------------------------------------------- 4100 An object with the same name already exists. ------ --------------------------------------------------------------------- 5000 Unhandled server error. ------ --------------------------------------------------------------------- 5001 The execution of one or more tasks failed. ------ --------------------------------------------------------------------- 5002 One or more shards are not available. ------ --------------------------------------------------------------------- 5003 The query failed on one or more shards ------ --------------------------------------------------------------------- 5004 Creating a snapshot failed ------ --------------------------------------------------------------------- 5030 The query was killed by a ``kill`` statement ====== ===================================================================== .. _http-bulk-ops: Bulk operations =============== The HTTP endpoint supports executing a single SQL statement many times with different parameters. Instead of the ``args`` (:ref:`http-param-substitution`) key, use the key ``bulk_args``. This allows to specify a list of lists, containing all the parameters which shall be processed. The inner lists need to match the specified columns. The bulk response contains a ``results`` array, with a row count for each bulk operation. Those results are in the same order as the issued operations of the bulk operation. Here an example that inserts three records at once:: sh$ curl -sS -H 'Content-Type: application/json' \ ... -X POST '127.0.0.1:4200/_sql' -d@- <<- EOF ... { ... "stmt": "INSERT INTO locations (id, name, kind, description) ... VALUES (?, ?, ?, ?)", ... "bulk_args": [ ... [1337, "Earth", "Planet", "An awesome place to spend some time on."], ... [1338, "Sun", "Star", "An extraordinarily hot place."], ... [1339, "Titan", "Moon", "Titan, where it rains fossil fuels."] ... ] ... } ... EOF { "cols": [], "duration": ..., "results": [ { "rowcount": 1 }, { "rowcount": 1 }, { "rowcount": 1 } ] } Statements with a result set cannot be executed in bulk. The supported bulk SQL statements are: - Insert - Update - Delete .. _http-bulk-errors: Bulk errors ----------- There are two kinds of error behaviors for bulk requests: 1. **Analysis error:** Occurs if the statement is invalid, either due to syntax errors or semantic errors identified during the analysis phase before the execution starts. In this case the **whole** operation fails and you'll get a single error:: { "error": { "code": 4043, "message": "ColumnUnknownException[Column y unknown]" } } 2. **Runtime error:** For errors happening after the analysis phase succeeded during execution. For example on duplicate primary key errors or check constraint failures. In this case CrateDB continues processing the other bulk arguments and reports the results via a ``rowcount`` where ``-2`` indicates an error. Additionally, each failing bulk operation result element contains an ``error`` object with the related :ref:`code ` and ``message``:: { "cols": [], "duration": 2.195417, "results": [ { "rowcount": 1 }, { "rowcount": -2, "error": { "code": 4091, "message": "DuplicateKeyException[A document with the same primary key exists already]" } } ] } .. 
note:: To avoid too much memory pressure caused by errors, only the first ``10`` errors happening on each involved shard will contain an ``error`` payload. Any following error is exposed only by the ``-2`` row count without any details. .. note:: The ``error_trace`` option does not work with bulk operations. .. _here documents: https://en.wikipedia.org/wiki/Here_document .. _prepared statement: https://en.wikipedia.org/wiki/Prepared_statement.. _interface-postgresql: ======================== PostgreSQL wire protocol ======================== CrateDB supports the `PostgreSQL wire protocol v3`_. If a node is started with PostgreSQL wire protocol support enabled it will bind to port ``5432`` by default. To use a custom port, set the corresponding :ref:`conf_ports` in the :ref:`Configuration `. However, even though connecting PostgreSQL tools and client libraries is supported, the actual SQL statements have to be supported by CrateDB's SQL dialect. A notable difference is that CrateDB doesn't support transactions, which is why clients should generally enable ``autocommit``. .. NOTE:: In order to use ``setFetchSize`` in JDBC it is possible to set auto commit to false. The client will utilize the fetchSize on SELECT statements and only load up to fetchSize rows into memory. See the `PostgreSQL JDBC Query docs`_ for more information. Write operations will still behave as if auto commit was enabled and commit or rollback calls are ignored. .. rubric:: Table of contents .. contents:: :local: .. _postgres-server-compat: Server compatibility ==================== CrateDB emulates PostgreSQL server version ``14``. .. _postgres-start-up: Start-up -------- .. _postgres-ssl: SSL Support ''''''''''' SSL can be configured using :ref:`admin_ssl`. .. _postgres-auth: Authentication '''''''''''''' Authentication methods can be configured using :ref:`admin_hba`. .. _postgres-parameterstatus: ParameterStatus ''''''''''''''' After the authentication succeeded, the server has the possibility to send multiple ``ParameterStatus`` messages to the client. These are used to communicate information like ``server_version`` (emulates PostgreSQL 9.5) or ``server_encoding``. ``CrateDB`` also sends a message containing the ``crate_version`` parameter. This contains the current ``CrateDB`` version number. This information is useful for clients to detect that they're connecting to ``CrateDB`` instead of a PostgreSQL instance. .. _postgres-db-selection: Database selection '''''''''''''''''' Since CrateDB uses schemas instead of databases, the ``database`` parameter sets the default schema name for future queries. If no schema is specified, the schema ``doc`` will be used as default. Additionally, the only supported charset is ``UTF8``. .. _postgres-query-modes: Query modes ----------- .. _postgres-query-modes-simple: Simple query '''''''''''' The `PostgreSQL simple query`_ protocol mode is fully implemented. .. _postgres-query-modes-extended: Extended query '''''''''''''' The `PostgreSQL extended query`_ protocol mode is implemented with the following limitations: - The ``ParameterDescription`` message works for the most common use cases except for DDL statements. - To optimize the execution of bulk operations the execution of statements is delayed until the ``Sync`` message is received .. _postgres-copy-na: Copy operations --------------- CrateDB does not support the ``COPY`` sub-protocol, see also :ref:`postgres-copy`. .. 
_postgres-fn-call: Function call ------------- The :ref:`function call ` sub-protocol is not supported since it's a legacy feature. .. _postgres-cancel-reqs: Canceling requests ------------------ `PostgreSQL cancelling requests`_ is fully implemented. .. _postgres-pg_catalog: ``pg_catalog`` -------------- For improved compatibility, the ``pg_catalog`` schema is implemented containing following tables: - `pg_am`_ - `pg_attrdef `__ - `pg_attribute `__ - `pg_class `__ - `pg_constraint `__ - `pg_cursors `__ - `pg_database `__ - `pg_depend`_ - `pg_description`_ - `pg_enum`_ - `pg_event_trigger`_ - `pg_index `__ - `pg_indexes `__ - `pg_locks `__ - `pg_matviews `__ - `pg_namespace `__ - `pg_proc `__ - `pg_publication `__ - `pg_publication_tables `__ - `pg_range`_ - `pg_roles`_ - `pg_settings `__ - `pg_shdescription`_ - `pg_stats`_ - `pg_subscription `__ - `pg_subscription_rel `__ - `pg_tables`_ - `pg_tablespace`_ - `pg_type`_ - `pg_views`_ .. _postgres-pg_type: ``pg_type`` ''''''''''' Some clients require the ``pg_catalog.pg_type`` in order to be able to stream arrays or other non-primitive types. For compatibility reasons, there is a trimmed down `pg_type `__ table available in CrateDB:: cr> SELECT oid, typname, typarray, typelem, typlen, typtype, typcategory ... FROM pg_catalog.pg_type ... ORDER BY oid; +------+--------------+----------+---------+--------+---------+-------------+ | oid | typname | typarray | typelem | typlen | typtype | typcategory | +------+--------------+----------+---------+--------+---------+-------------+ | 16 | bool | 1000 | 0 | 1 | b | N | | 18 | char | 1002 | 0 | 1 | b | S | | 19 | name | -1 | 0 | 64 | b | S | | 20 | int8 | 1016 | 0 | 8 | b | N | | 21 | int2 | 1005 | 0 | 2 | b | N | | 23 | int4 | 1007 | 0 | 4 | b | N | | 24 | regproc | 1008 | 0 | 4 | b | N | | 25 | text | 1009 | 0 | -1 | b | S | | 26 | oid | 1028 | 0 | 4 | b | N | | 30 | oidvector | 1013 | 26 | -1 | b | A | | 114 | json | 199 | 0 | -1 | b | U | | 199 | _json | 0 | 114 | -1 | b | A | | 600 | point | 1017 | 0 | 16 | b | G | | 700 | float4 | 1021 | 0 | 4 | b | N | | 701 | float8 | 1022 | 0 | 8 | b | N | | 705 | unknown | 0 | 0 | -2 | p | X | | 1000 | _bool | 0 | 16 | -1 | b | A | | 1002 | _char | 0 | 18 | -1 | b | A | | 1005 | _int2 | 0 | 21 | -1 | b | A | | 1007 | _int4 | 0 | 23 | -1 | b | A | | 1008 | _regproc | 0 | 24 | -1 | b | A | | 1009 | _text | 0 | 25 | -1 | b | A | | 1014 | _bpchar | 0 | 1042 | -1 | b | A | | 1015 | _varchar | 0 | 1043 | -1 | b | A | | 1016 | _int8 | 0 | 20 | -1 | b | A | | 1017 | _point | 0 | 600 | -1 | b | A | | 1021 | _float4 | 0 | 700 | -1 | b | A | | 1022 | _float8 | 0 | 701 | -1 | b | A | | 1042 | bpchar | 1014 | 0 | -1 | b | S | | 1043 | varchar | 1015 | 0 | -1 | b | S | | 1082 | date | 1182 | 0 | 8 | b | D | | 1114 | timestamp | 1115 | 0 | 8 | b | D | | 1115 | _timestamp | 0 | 1114 | -1 | b | A | | 1182 | _date | 0 | 1082 | -1 | b | A | | 1184 | timestamptz | 1185 | 0 | 8 | b | D | | 1185 | _timestamptz | 0 | 1184 | -1 | b | A | | 1186 | interval | 1187 | 0 | 16 | b | T | | 1187 | _interval | 0 | 1186 | -1 | b | A | | 1231 | _numeric | 0 | 1700 | -1 | b | A | | 1266 | timetz | 1270 | 0 | 12 | b | D | | 1270 | _timetz | 0 | 1266 | -1 | b | A | | 1560 | bit | 1561 | 0 | -1 | b | V | | 1561 | _bit | 0 | 1560 | -1 | b | A | | 1700 | numeric | 1231 | 0 | -1 | b | N | | 2205 | regclass | 2210 | 0 | 4 | b | N | | 2210 | _regclass | 0 | 2205 | -1 | b | A | | 2249 | record | 2287 | 0 | -1 | p | P | | 2276 | any | 0 | 0 | 4 | p | P | | 2277 | anyarray | 0 | 2276 | -1 | p | P | | 
2287 | _record | 0 | 2249 | -1 | p | A | +------+--------------+----------+---------+--------+---------+-------------+ SELECT 50 rows in set (... sec) .. NOTE:: This is just a snapshot of the table. Check table :ref:`information_schema.columns ` to get information for all supported columns. .. _postgres-pg_type-oid: OID types ......... *Object Identifiers* (OIDs) are used internally by PostgreSQL as primary keys for various system tables. CrateDB supports the :ref:`oid ` type and the following aliases: +-------------------+----------------------+-------------+-------------+ | Name | Reference | Description | Example | +===================+======================+=============+=============+ | :ref:`regproc | `pg_proc | A function | ``sum`` | | ` | `__ | name | | +-------------------+----------------------+-------------+-------------+ | :ref:`regclass | `pg_class | A relation | ``pg_type`` | | ` | `__ | name | | +-------------------+----------------------+-------------+-------------+ CrateDB also supports the :ref:`oidvector ` type. .. NOTE:: Casting a :ref:`string ` or an :ref:`integer ` to the ``regproc`` type does not result in a function lookup (as it does with PostgreSQL). Instead: .. rst-class:: open - Casting a string to the ``regproc`` type results in an object of the ``regproc`` type with a name equal to the string value and an ``oid`` equal to an integer hash of the string. - Casting an integer to the ``regproc`` type results in an object of the ``regproc`` type with a name equal to the string representation of the integer and an ``oid`` equal to the integer value. Consult the :ref:`CrateDB data types reference ` for more information about each OID type (including additional type casting behaviour). .. _postgres-show-trans-isolation: Show transaction isolation -------------------------- For compatibility with JDBC the ``SHOW TRANSACTION ISOLATION LEVEL`` statement is implemented:: cr> show transaction isolation level; +-----------------------+ | transaction_isolation | +-----------------------+ | read uncommitted | +-----------------------+ SHOW 1 row in set (... sec) .. _postgres-begin-start-comit: ``BEGIN``, ``START``, and ``COMMIT`` statements ----------------------------------------------- For compatibility with clients that use the PostgresSQL wire protocol (e.g., the Golang lib/pq and pgx drivers), CrateDB will accept the :ref:`BEGIN `, :ref:`COMMIT `, and :ref:`START TRANSACTION ` statements. For example:: cr> BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, ... READ ONLY, ... NOT DEFERRABLE; BEGIN OK, 0 rows affected (... sec) cr> COMMIT COMMIT OK, 0 rows affected (... sec) CrateDB will silently ignore the ``COMMIT``, ``BEGIN``, and ``START TRANSACTION`` statements and all respective parameters. .. _postgres-client-compat: Client compatibility ==================== .. _postgres-client-jdbc: JDBC ---- `pgjdbc`_ JDBC drivers version ``9.4.1209`` and above are compatible. .. _postgres-client-jdbc-limit: Limitations ''''''''''' - *Reflection* methods like ``conn.getMetaData().getTables(...)`` won't work since the required tables are unavailable in CrateDB. As a workaround it's possible to use ``SHOW TABLES`` or query the ``information_schema`` tables manually using ``SELECT`` statements. - ``OBJECT`` and ``GEO_SHAPE`` columns can be streamed as ``JSON`` but require `pgjdbc`_ version ``9.4.1210`` or newer. - Multidimensional arrays will be streamed as ``JSON`` encoded string to avoid a protocol limitation where all sub-arrays are required to have the same length. 
- The behavior of ``PreparedStatement.executeBatch`` in error cases depends on in which stage an error occurs: A ``BatchUpdateException`` is thrown if no processing has been done yet, whereas single operations failing after the processing started are indicated by an ``EXECUTE_FAILED`` (-3) return value. - Transaction limitations as described above. - Having ``escape processing`` enabled could prevent the usage of :ref:`Object Literals ` in case an object key's starting character clashes with a JDBC escape keyword (see also `JDBC escape syntax `_). Disabling ``escape processing`` will remedy this appropriately for `pgjdbc`_ version >= ``9.4.1212``. .. _postgres-client-jdbc-conn: Connection failover and load balancing '''''''''''''''''''''''''''''''''''''' Connection failover and load balancing is supported as described here: `PostgreSQL JDBC connection failover`_. .. NOTE:: It is not recommended to use the **targetServerType** parameter since CrateDB has no concept of master-replica nodes. .. _postgres-implementation: Implementation differences ========================== The PostgreSQL Wire Protocol makes it easy to use many PostgreSQL compatible tools and libraries directly with CrateDB. However, many of these tools assume that they are talking to PostgreSQL specifically, and thus rely on SQL extensions and idioms that are unique to PostgreSQL. Because of this, some tools or libraries may not work with other SQL databases such as CrateDB. CrateDB's SQL query engine enables real-time search & aggregations for online analytic processing (OLAP) and business intelligence (BI) with the benefit of the ability to scale horizontally. The use-cases of CrateDB are different than those of PostgreSQL, as CrateDB's specialized storage schema and query execution engine addresses different needs (see :ref:`Clustering `). The features listed below cover the main differences in implementation and dialect between CrateDB and PostgreSQL. A detailed comparison between CrateDB's SQL dialect and standard SQL is outlined in :ref:`appendix-compatibility`. .. _postgres-copy: Copy operations --------------- CrateDB does not support the distinct sub-protocol that is used to serve ``COPY`` operations and provides another implementation for transferring bulk data using the :ref:`sql-copy-from` and :ref:`sql-copy-to` statements. .. _postgres-types: Data types ---------- .. _postgres-date-times: Dates and times ''''''''''''''' At the moment, CrateDB does not support ``TIME`` without a time zone. Additionally, CrateDB does not support the ``INTERVAL`` input units ``MILLENNIUM``, ``CENTURY``, ``DECADE``, or ``MICROSECOND``. .. _postgres-objects: Objects ''''''' The definition of structured values by using ``JSON`` types, *composite types* or ``HSTORE`` are not supported. CrateDB alternatively allows the definition of nested documents (of type :ref:`type-object`) that store fields containing any CrateDB supported data type, including nested object types. .. _postgres-arrays: Arrays '''''' .. _postgres-arrays-declare: Declaration of arrays ..................... While multidimensional arrays in PostgreSQL must have matching extends for each dimension, CrateDB allows different length nested arrays as this example shows:: cr> select [[1,2,3],[1,2]] from sys.cluster; +---------------------+ | [[1, 2, 3], [1, 2]] | +---------------------+ | [[1, 2, 3], [1, 2]] | +---------------------+ SELECT 1 row in set (... sec) .. 
_postgres-type-casts: Type casts '''''''''' CrateDB accepts the :ref:`data-types-casting` syntax for conversion of one data type to another. .. SEEALSO:: `PostgreSQL value expressions`_ :ref:`CrateDB value expressions ` .. _postgres-search: Text search functions and operators ----------------------------------- The :ref:`functions ` and :ref:`operators ` provided by PostgreSQL for :ref:`full-text search ` (see `PostgreSQL fulltext Search`_) are not compatible with those provided by CrateDB. If you are missing features, functions or dialect improvements and have a great use case for it, let us know on `GitHub`_. We're always improving and extending CrateDB and we love to hear feedback. .. _GitHub: https://github.com/crate/crate .. _pg_am: https://www.postgresql.org/docs/14/catalog-pg-am.html .. _pg_description: https://www.postgresql.org/docs/14/catalog-pg-description.html .. _pg_enum: https://www.postgresql.org/docs/14/catalog-pg-enum.html .. _pg_range: https://www.postgresql.org/docs/14/catalog-pg-range.html .. _pg_roles: https://www.postgresql.org/docs/14/view-pg-roles.html .. _pg_tables: https://www.postgresql.org/docs/14/view-pg-tables.html .. _pg_tablespace: https://www.postgresql.org/docs/14/catalog-pg-tablespace.html .. _pg_views: https://www.postgresql.org/docs/14/view-pg-views.html .. _pg_shdescription: https://www.postgresql.org/docs/14/catalog-pg-shdescription.html .. _pg_stats: https://www.postgresql.org/docs/14/view-pg-stats.html .. _pg_event_trigger: https://www.postgresql.org/docs/current/catalog-pg-event-trigger.html .. _pg_depend: https://www.postgresql.org/docs/current/catalog-pg-depend.html .. _pgjdbc: https://github.com/pgjdbc/pgjdbc .. _pgsql_pg_attrdef: https://www.postgresql.org/docs/14/static/catalog-pg-attrdef.html .. _pgsql_pg_attribute: https://www.postgresql.org/docs/14/static/catalog-pg-attribute.html .. _pgsql_pg_class: https://www.postgresql.org/docs/14/static/catalog-pg-class.html .. _pgsql_pg_constraint: https://www.postgresql.org/docs/14/static/catalog-pg-constraint.html .. _pgsql_pg_cursors: https://www.postgresql.org/docs/15/view-pg-cursors.html .. _pgsql_pg_database: https://www.postgresql.org/docs/14/static/catalog-pg-database.html .. _pgsql_pg_index: https://www.postgresql.org/docs/14/static/catalog-pg-index.html .. _pgsql_pg_indexes: https://www.postgresql.org/docs/14/view-pg-indexes.html .. _pgsql_pg_locks: https://www.postgresql.org/docs/14/view-pg-locks.html .. _pgsql_pg_matviews: https://www.postgresql.org/docs/current/view-pg-matviews.html .. _pgsql_pg_namespace: https://www.postgresql.org/docs/14/static/catalog-pg-namespace.html .. _pgsql_pg_proc: https://www.postgresql.org/docs/14/static/catalog-pg-proc.html .. _pgsql_pg_publication: https://www.postgresql.org/docs/14/catalog-pg-publication.html .. _pgsql_pg_publication_tables: https://www.postgresql.org/docs/14/view-pg-publication-tables.html .. _pgsql_pg_subscription: https://www.postgresql.org/docs/14/catalog-pg-subscription.html .. _pgsql_pg_subscription_rel: https://www.postgresql.org/docs/14/catalog-pg-subscription-rel.html .. _pgsql_pg_settings: https://www.postgresql.org/docs/14/view-pg-settings.html .. _pgsql_pg_type: https://www.postgresql.org/docs/14/static/catalog-pg-type.html .. _PostgreSQL Arrays: https://www.postgresql.org/docs/14/static/arrays.html .. _PostgreSQL extended query: https://www.postgresql.org/docs/14/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY .. _PostgreSQL Fulltext Search: https://www.postgresql.org/docs/14/static/functions-textsearch.html .. 
_PostgreSQL JDBC connection failover: https://jdbc.postgresql.org/documentation/use/#connection-fail-over .. _PostgreSQL JDBC Query docs: https://jdbc.postgresql.org/documentation/query .. _PostgreSQL simple query: https://www.postgresql.org/docs/14/static/protocol-flow.html#id-1.10.5.7.4 .. _PostgreSQL value expressions: https://www.postgresql.org/docs/14/static/sql-expressions.html .. _PostgreSQL wire protocol v3: https://www.postgresql.org/docs/14/static/protocol.html .. _PostgreSQL cancelling requests: https://www.postgresql.org/docs/14/protocol-flow.html#id-1.10.5.7.10.. highlight:: psql .. _information_schema: ================== Information schema ================== ``information_schema`` is a special schema that contains virtual tables which are read-only and can be queried to get information about the state of the cluster. .. rubric:: Table of contents .. contents:: :local: Access ====== When the user management is enabled, accessing the ``information_schema`` is open to all users and it does not require any privileges. However, being able to query ``information_schema`` tables will not allow the user to retrieve all the rows in the table, as it can contain information related to tables over which the connected user does not have any privileges. The only rows that will be returned will be the ones the user is allowed to access. For example, if the user ``john`` has any privilege on the ``doc.books`` table but no privilege at all on ``doc.locations``, when ``john`` issues a ``SELECT * FROM information_schema.tables`` statement, the tables information related to the ``doc.locations`` table will not be returned. Virtual tables ============== .. _information_schema_tables: ``tables`` ---------- The ``information_schema.tables`` virtual table can be queried to get a list of all available tables and views and their settings, such as number of shards or number of replicas. .. hide: CREATE VIEW:: cr> CREATE VIEW galaxies AS ... SELECT id, name, description FROM locations WHERE kind = 'Galaxy'; CREATE OK, 1 row affected (... sec) .. hide: CREATE TABLE:: cr> create table partitioned_table ( ... id bigint, ... title text, ... date timestamp with time zone ... ) partitioned by (date); CREATE OK, 1 row affected (... sec) :: cr> SELECT table_schema, table_name, table_type, number_of_shards, number_of_replicas ... FROM information_schema.tables ... 
ORDER BY table_schema ASC, table_name ASC; +--------------------+-------------------------+------------+------------------+--------------------+ | table_schema | table_name | table_type | number_of_shards | number_of_replicas | +--------------------+-------------------------+------------+------------------+--------------------+ | doc | galaxies | VIEW | NULL | NULL | | doc | locations | BASE TABLE | 2 | 0 | | doc | partitioned_table | BASE TABLE | 4 | 0-1 | | doc | quotes | BASE TABLE | 2 | 0 | | information_schema | character_sets | BASE TABLE | NULL | NULL | | information_schema | columns | BASE TABLE | NULL | NULL | | information_schema | foreign_server_options | BASE TABLE | NULL | NULL | | information_schema | foreign_servers | BASE TABLE | NULL | NULL | | information_schema | foreign_table_options | BASE TABLE | NULL | NULL | | information_schema | foreign_tables | BASE TABLE | NULL | NULL | | information_schema | key_column_usage | BASE TABLE | NULL | NULL | | information_schema | referential_constraints | BASE TABLE | NULL | NULL | | information_schema | routines | BASE TABLE | NULL | NULL | | information_schema | schemata | BASE TABLE | NULL | NULL | | information_schema | sql_features | BASE TABLE | NULL | NULL | | information_schema | table_constraints | BASE TABLE | NULL | NULL | | information_schema | table_partitions | BASE TABLE | NULL | NULL | | information_schema | tables | BASE TABLE | NULL | NULL | | information_schema | user_mapping_options | BASE TABLE | NULL | NULL | | information_schema | user_mappings | BASE TABLE | NULL | NULL | | information_schema | views | BASE TABLE | NULL | NULL | | pg_catalog | pg_am | BASE TABLE | NULL | NULL | | pg_catalog | pg_attrdef | BASE TABLE | NULL | NULL | | pg_catalog | pg_attribute | BASE TABLE | NULL | NULL | | pg_catalog | pg_class | BASE TABLE | NULL | NULL | | pg_catalog | pg_constraint | BASE TABLE | NULL | NULL | | pg_catalog | pg_cursors | BASE TABLE | NULL | NULL | | pg_catalog | pg_database | BASE TABLE | NULL | NULL | | pg_catalog | pg_depend | BASE TABLE | NULL | NULL | | pg_catalog | pg_description | BASE TABLE | NULL | NULL | | pg_catalog | pg_enum | BASE TABLE | NULL | NULL | | pg_catalog | pg_event_trigger | BASE TABLE | NULL | NULL | | pg_catalog | pg_index | BASE TABLE | NULL | NULL | | pg_catalog | pg_indexes | BASE TABLE | NULL | NULL | | pg_catalog | pg_locks | BASE TABLE | NULL | NULL | | pg_catalog | pg_matviews | BASE TABLE | NULL | NULL | | pg_catalog | pg_namespace | BASE TABLE | NULL | NULL | | pg_catalog | pg_proc | BASE TABLE | NULL | NULL | | pg_catalog | pg_publication | BASE TABLE | NULL | NULL | | pg_catalog | pg_publication_tables | BASE TABLE | NULL | NULL | | pg_catalog | pg_range | BASE TABLE | NULL | NULL | | pg_catalog | pg_roles | BASE TABLE | NULL | NULL | | pg_catalog | pg_settings | BASE TABLE | NULL | NULL | | pg_catalog | pg_shdescription | BASE TABLE | NULL | NULL | | pg_catalog | pg_stats | BASE TABLE | NULL | NULL | | pg_catalog | pg_subscription | BASE TABLE | NULL | NULL | | pg_catalog | pg_subscription_rel | BASE TABLE | NULL | NULL | | pg_catalog | pg_tables | BASE TABLE | NULL | NULL | | pg_catalog | pg_tablespace | BASE TABLE | NULL | NULL | | pg_catalog | pg_type | BASE TABLE | NULL | NULL | | pg_catalog | pg_views | BASE TABLE | NULL | NULL | | sys | allocations | BASE TABLE | NULL | NULL | | sys | checks | BASE TABLE | NULL | NULL | | sys | cluster | BASE TABLE | NULL | NULL | | sys | health | BASE TABLE | NULL | NULL | | sys | jobs | BASE TABLE | NULL | NULL | | sys | 
jobs_log | BASE TABLE | NULL | NULL | | sys | jobs_metrics | BASE TABLE | NULL | NULL | | sys | node_checks | BASE TABLE | NULL | NULL | | sys | nodes | BASE TABLE | NULL | NULL | | sys | operations | BASE TABLE | NULL | NULL | | sys | operations_log | BASE TABLE | NULL | NULL | | sys | privileges | BASE TABLE | NULL | NULL | | sys | repositories | BASE TABLE | NULL | NULL | | sys | roles | BASE TABLE | NULL | NULL | | sys | segments | BASE TABLE | NULL | NULL | | sys | sessions | BASE TABLE | NULL | NULL | | sys | shards | BASE TABLE | NULL | NULL | | sys | snapshot_restore | BASE TABLE | NULL | NULL | | sys | snapshots | BASE TABLE | NULL | NULL | | sys | summits | BASE TABLE | NULL | NULL | | sys | users | BASE TABLE | NULL | NULL | +--------------------+-------------------------+------------+------------------+--------------------+ SELECT 72 rows in set (... sec) The table also contains additional information such as the specified :ref:`routing column ` and :ref:`partition columns `:: cr> SELECT table_name, clustered_by, partitioned_by ... FROM information_schema.tables ... WHERE table_schema = 'doc' ... ORDER BY table_schema ASC, table_name ASC; +-------------------+--------------+----------------+ | table_name | clustered_by | partitioned_by | +-------------------+--------------+----------------+ | galaxies | NULL | NULL | | locations | id | NULL | | partitioned_table | _id | ["date"] | | quotes | id | NULL | +-------------------+--------------+----------------+ SELECT 4 rows in set (... sec) .. rubric:: Schema +----------------------------------+------------------------------------------------------------------------------------+-------------+ | Name | Description | Data Type | +==================================+====================================================================================+=============+ | ``blobs_path`` | The data path of the blob table | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``closed`` | The state of the table | ``BOOLEAN`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``clustered_by`` | The :ref:`routing column ` used to cluster the table | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``column_policy`` | Defines whether the table uses a ``STRICT`` or a ``DYNAMIC`` :ref:`column_policy` | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``number_of_replicas`` | The number of replicas the table currently has | ``INTEGER`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``number_of_shards`` | The number of shards the table is currently distributed across | ``INTEGER`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``partitioned_by`` | The :ref:`partition columns ` (used to partition the | ``TEXT`` | | | table) | | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``reference_generation`` | Specifies how values in the self-referencing column are generated | ``TEXT`` | 
+----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``routing_hash_function`` | The name of the hash function used for internal :ref:`routing ` | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``self_referencing_column_name`` | The name of the column that uniquely identifies each row (always ``_id``) | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``settings`` | :ref:`sql-create-table-with` | ``OBJECT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``table_catalog`` | Refers to the ``table_schema`` | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``table_name`` | The name of the table | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``table_schema`` | The name of the schema the table belongs to | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``table_type`` | The type of the table (``BASE TABLE`` for tables, ``VIEW`` for views) | ``TEXT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ | ``version`` | A collection of version numbers relevant to the table | ``OBJECT`` | +----------------------------------+------------------------------------------------------------------------------------+-------------+ ``settings`` ............ Table settings specify configuration parameters for tables. Some settings can be set during Cluster runtime and others are only applied on cluster restart. This list of table settings in :ref:`sql-create-table-with` shows detailed information of each parameter. Table parameters can be applied with ``CREATE TABLE`` on creation of a table. With ``ALTER TABLE`` they can be set on already existing tables. The following statement creates a new table and sets the refresh interval of shards to 500 ms and sets the :ref:`shard allocation ` for primary shards only:: cr> create table parameterized_table (id integer, content text) ... with ("refresh_interval"=500, "routing.allocation.enable"='primaries'); CREATE OK, 1 row affected (... sec) The settings can be verified by querying ``information_schema.tables``:: cr> select settings['routing']['allocation']['enable'] as alloc_enable, ... settings['refresh_interval'] as refresh_interval ... from information_schema.tables ... where table_name='parameterized_table'; +--------------+------------------+ | alloc_enable | refresh_interval | +--------------+------------------+ | primaries | 500 | +--------------+------------------+ SELECT 1 row in set (... sec) On existing tables this needs to be done with ``ALTER TABLE`` statement:: cr> alter table parameterized_table ... set ("routing.allocation.enable"='none'); ALTER OK, -1 rows affected (... sec) .. hide: cr> drop table parameterized_table; DROP OK, 1 row affected (... sec) ``views`` --------- The table ``information_schema.views`` contains the name, definition and options of all available views. 
:: cr> SELECT table_schema, table_name, view_definition ... FROM information_schema.views ... ORDER BY table_schema ASC, table_name ASC; +--------------+------------+-------------------------+ | table_schema | table_name | view_definition | +--------------+------------+-------------------------+ | doc | galaxies | SELECT | | | | "id" | | | | , "name" | | | | , "description" | | | | FROM "locations" | | | | WHERE "kind" = 'Galaxy' | +--------------+------------+-------------------------+ SELECT 1 row in set (... sec) .. rubric:: Schema +---------------------+-------------------------------------------------------------------------------------+-------------+ | Name | Description | Data Type | +=====================+=====================================================================================+=============+ | ``table_catalog`` | The catalog of the table of the view (refers to ``table_schema``) | ``TEXT`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ | ``table_schema`` | The schema of the table of the view | ``TEXT`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ | ``table_name`` | The name of the table of the view | ``TEXT`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ | ``view_definition`` | The SELECT statement that defines the view | ``TEXT`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ | ``check_option`` | Not applicable for CrateDB, always return ``NONE`` | ``TEXT`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ | ``is_updatable`` | Whether the view is updatable. Not applicable for CrateDB, always returns ``FALSE`` | ``BOOLEAN`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ | ``owner`` | The user that created the view | ``TEXT`` | +---------------------+-------------------------------------------------------------------------------------+-------------+ .. note:: If you drop the table of a view, the view will still exist and show up in the ``information_schema.tables`` and ``information_schema.views`` tables. .. hide: cr> DROP view galaxies; DROP OK, 1 row affected (... sec) .. _information_schema_columns: ``columns`` ----------- This table can be queried to get a list of all available columns of all tables and views and their definition like data type and ordinal position inside the table:: cr> select table_name, column_name, ordinal_position as pos, data_type ... from information_schema.columns ... where table_schema = 'doc' and table_name not like 'my_table%' ... 
order by table_name asc, column_name asc; +-------------------+--------------------------------+-----+--------------------------+ | table_name | column_name | pos | data_type | +-------------------+--------------------------------+-----+--------------------------+ | locations | date | 3 | timestamp with time zone | | locations | description | 6 | text | | locations | id | 1 | integer | | locations | information | 11 | object_array | | locations | information['evolution_level'] | 13 | smallint | | locations | information['population'] | 12 | bigint | | locations | inhabitants | 7 | object | | locations | inhabitants['description'] | 9 | text | | locations | inhabitants['interests'] | 8 | text_array | | locations | inhabitants['name'] | 10 | text | | locations | kind | 4 | text | | locations | landmarks | 14 | text_array | | locations | name | 2 | text | | locations | position | 5 | integer | | partitioned_table | date | 3 | timestamp with time zone | | partitioned_table | id | 1 | bigint | | partitioned_table | title | 2 | text | | quotes | id | 1 | integer | | quotes | quote | 2 | text | +-------------------+--------------------------------+-----+--------------------------+ SELECT 19 rows in set (... sec) You can even query this table's own columns (attention: this might lead to infinite recursion of your mind, beware!):: cr> select column_name, data_type, ordinal_position ... from information_schema.columns ... where table_schema = 'information_schema' ... and table_name = 'columns' order by column_name asc; +--------------------------+------------+------------------+ | column_name | data_type | ordinal_position | +--------------------------+------------+------------------+ | character_maximum_length | integer | 1 | | character_octet_length | integer | 2 | | character_set_catalog | text | 3 | | character_set_name | text | 4 | | character_set_schema | text | 5 | | check_action | integer | 6 | | check_references | text | 7 | | collation_catalog | text | 8 | | collation_name | text | 9 | | collation_schema | text | 10 | | column_default | text | 11 | | column_details | object | 12 | | column_details['name'] | text | 13 | | column_details['path'] | text_array | 14 | | column_details['policy'] | text | 15 | | column_name | text | 16 | | data_type | text | 17 | | datetime_precision | integer | 18 | | domain_catalog | text | 19 | | domain_name | text | 20 | | domain_schema | text | 21 | | generation_expression | text | 22 | | identity_cycle | boolean | 23 | | identity_generation | text | 24 | | identity_increment | text | 25 | | identity_maximum | text | 26 | | identity_minimum | text | 27 | | identity_start | text | 28 | | interval_precision | integer | 29 | | interval_type | text | 30 | | is_generated | text | 31 | | is_identity | boolean | 32 | | is_nullable | boolean | 33 | | numeric_precision | integer | 34 | | numeric_precision_radix | integer | 35 | | numeric_scale | integer | 36 | | ordinal_position | integer | 37 | | table_catalog | text | 38 | | table_name | text | 39 | | table_schema | text | 40 | | udt_catalog | text | 41 | | udt_name | text | 42 | | udt_schema | text | 43 | +--------------------------+------------+------------------+ SELECT 43 rows in set (... sec) .. 
rubric:: Schema +-------------------------------+-----------------------------------------------+---------------+ | Name | Description | Data Type | +===============================+===============================================+===============+ | ``table_catalog`` | Refers to the ``table_schema`` | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``table_schema`` | Schema name containing the table | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``table_name`` | Table Name | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``column_name`` | Column Name | ``TEXT`` | | | For fields in object columns this is not an | | | | identifier but a path and therefore must not | | | | be double quoted when programmatically | | | | obtained. | | +-------------------------------+-----------------------------------------------+---------------+ | ``ordinal_position`` | The position of the column within the | ``INTEGER`` | | | table | | +-------------------------------+-----------------------------------------------+---------------+ | ``is_nullable`` | Whether the column is nullable | ``BOOLEAN`` | +-------------------------------+-----------------------------------------------+---------------+ | ``data_type`` | The data type of the column | ``TEXT`` | | | | | | | For further information see :ref:`data-types` | | +-------------------------------+-----------------------------------------------+---------------+ | ``column_default`` | The default :ref:`expression | ``TEXT`` | | | ` of the column | | +-------------------------------+-----------------------------------------------+---------------+ | ``character_maximum_length`` | If the data type is a :ref:`character type | ``INTEGER`` | | | ` then return the | | | | declared length limit; otherwise ``NULL``. | | +-------------------------------+-----------------------------------------------+---------------+ | ``character_octet_length`` | Not implemented (always returns ``NULL``) | ``INTEGER`` | | | | | | | Please refer to :ref:`type-text` type | | +-------------------------------+-----------------------------------------------+---------------+ | ``numeric_precision`` | Indicates the number of significant digits | ``INTEGER`` | | | for a numeric ``data_type``. For all other | | | | data types this column is ``NULL``. | | +-------------------------------+-----------------------------------------------+---------------+ | ``numeric_precision_radix`` | Indicates in which base the value in the | ``INTEGER`` | | | column ``numeric_precision`` for a numeric | | | | ``data_type`` is exposed. This can either be | | | | 2 (binary) or 10 (decimal). For all other | | | | data types this column is ``NULL``. | | +-------------------------------+-----------------------------------------------+---------------+ | ``numeric_scale`` | Not implemented (always returns ``NULL``) | ``INTEGER`` | +-------------------------------+-----------------------------------------------+---------------+ | ``datetime_precision`` | Contains the fractional seconds precision for | ``INTEGER`` | | | a ``timestamp`` ``data_type``. For all other | | | | data types this column is ``null``. 
| | +-------------------------------+-----------------------------------------------+---------------+ | ``interval_type`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``interval_precision`` | Not implemented (always returns ``NULL``) | ``INTEGER`` | +-------------------------------+-----------------------------------------------+---------------+ | ``character_set_catalog`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``character_set_schema`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``character_set_name`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``collation_catalog`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``collation_schema`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``collation_name`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``domain_catalog`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``domain_schema`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``domain_name`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``udt_catalog`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``udt_schema`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``udt_name`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``check_references`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``check_action`` | Not implemented (always returns ``NULL``) | ``INTEGER`` | +-------------------------------+-----------------------------------------------+---------------+ | ``generation_expression`` | The expression used to generate ad column. | ``TEXT`` | | | If the column is not generated ``NULL`` is | | | | returned. | | +-------------------------------+-----------------------------------------------+---------------+ | ``is_generated`` | Returns ``ALWAYS`` or ``NEVER`` wether the | ``TEXT`` | | | column is generated or not. 
| | +-------------------------------+-----------------------------------------------+---------------+ | ``is_identity`` | Not implemented (always returns ``false``) | ``BOOLEAN`` | +-------------------------------+-----------------------------------------------+---------------+ | ``identity_cycle`` | Not implemented (always returns ``NULL``) | ``BOOLEAN`` | +-------------------------------+-----------------------------------------------+---------------+ | ``identity_generation`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``identity_increment`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``identity_maximum`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``identity_minimum`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ | ``identity_start`` | Not implemented (always returns ``NULL``) | ``TEXT`` | +-------------------------------+-----------------------------------------------+---------------+ .. _information_schema_table_constraints: ``table_constraints`` --------------------- This table can be queried to get a list of all defined table constraints, their type, name and which table they are defined in. .. NOTE:: Currently only ``PRIMARY_KEY`` constraints are supported. .. hide: cr> create table tbl (col TEXT NOT NULL); CREATE OK, 1 row affected (... sec) :: cr> select table_schema, table_name, constraint_name, constraint_type as type ... from information_schema.table_constraints ... where table_name = 'tables' ... or table_name = 'quotes' ... or table_name = 'documents' ... or table_name = 'tbl' ... order by table_schema desc, table_name asc limit 10; +--------------------+------------+------------------------+-------------+ | table_schema | table_name | constraint_name | type | +--------------------+------------+------------------------+-------------+ | information_schema | tables | tables_pkey | PRIMARY KEY | | doc | quotes | quotes_pkey | PRIMARY KEY | | doc | quotes | doc_quotes_id_not_null | CHECK | | doc | tbl | doc_tbl_col_not_null | CHECK | +--------------------+------------+------------------------+-------------+ SELECT 4 rows in set (... sec) .. _information_schema_key_column_usage: ``key_column_usage`` -------------------- This table may be queried to retrieve primary key information from all user tables: .. hide: cr> create table students (id bigint, department integer, name text, primary key(id, department)) CREATE OK, 1 row affected (... sec) :: cr> select constraint_name, table_name, column_name, ordinal_position ... from information_schema.key_column_usage ... where table_name = 'students' +-----------------+------------+-------------+------------------+ | constraint_name | table_name | column_name | ordinal_position | +-----------------+------------+-------------+------------------+ | students_pkey | students | id | 1 | | students_pkey | students | department | 2 | +-----------------+------------+-------------+------------------+ SELECT 2 rows in set (... sec) .. 
rubric:: Schema +-------------------------+-------------------------------------------------------------------------+-------------+ | Name | Description | Data Type | +=========================+=========================================================================+=============+ | ``constraint_catalog`` | Refers to ``table_catalog`` | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``constraint_schema`` | Refers to ``table_schema`` | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``constraint_name`` | Name of the constraint | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``table_catalog`` | Refers to ``table_schema`` | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``table_schema`` | Name of the schema that contains the table that contains the constraint | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``table_name`` | Name of the table that contains the constraint | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``column_name`` | Name of the column that contains the constraint | ``TEXT`` | +-------------------------+-------------------------------------------------------------------------+-------------+ | ``ordinal_position`` | Position of the column within the constraint (starts with 1) | ``INTEGER`` | +-------------------------+-------------------------------------------------------------------------+-------------+ .. _is_table_partitions: ``table_partitions`` -------------------- This table can be queried to get information about all :ref:`partitioned tables `, Each partition of a table is represented as one row. The row contains the information table name, schema name, partition ident, and the values of the partition. ``values`` is a key-value object with the :ref:`partition column ` (or columns) as key(s) and the corresponding value as value(s). .. hide: cr> create table a_partitioned_table (id integer, content text) ... partitioned by (content); CREATE OK, 1 row affected (... sec) :: cr> insert into a_partitioned_table (id, content) values (1, 'content_a'); INSERT OK, 1 row affected (... sec) :: cr> alter table a_partitioned_table set (number_of_shards=5); ALTER OK, -1 rows affected (... sec) :: cr> insert into a_partitioned_table (id, content) values (2, 'content_b'); INSERT OK, 1 row affected (... sec) The following example shows a table where the column ``content`` of table ``a_partitioned_table`` has been used to partition the table. The table has two partitions. The partitions are introduced when data is inserted where ``content`` is ``content_a``, and ``content_b``.:: cr> select table_name, table_schema as schema, partition_ident, "values" ... from information_schema.table_partitions ... 
order by table_name, partition_ident; +---------------------+--------+--------------------+--------------------------+ | table_name | schema | partition_ident | values | +---------------------+--------+--------------------+--------------------------+ | a_partitioned_table | doc | 04566rreehimst2vc4 | {"content": "content_a"} | | a_partitioned_table | doc | 04566rreehimst2vc8 | {"content": "content_b"} | +---------------------+--------+--------------------+--------------------------+ SELECT 2 rows in set (... sec) The second partition has been created after the number of shards for future partitions have been changed on the partitioned table, so they show ``5`` instead of ``4``:: cr> select table_name, partition_ident, ... number_of_shards, number_of_replicas ... from information_schema.table_partitions ... order by table_name, partition_ident; +---------------------+--------------------+------------------+--------------------+ | table_name | partition_ident | number_of_shards | number_of_replicas | +---------------------+--------------------+------------------+--------------------+ | a_partitioned_table | 04566rreehimst2vc4 | 4 | 0-1 | | a_partitioned_table | 04566rreehimst2vc8 | 5 | 0-1 | +---------------------+--------------------+------------------+--------------------+ SELECT 2 rows in set (... sec) ``routines`` ------------ The routines table contains tokenizers, token-filters, char-filters, custom analyzers created by ``CREATE ANALYZER`` statements (see :ref:`sql-ddl-custom-analyzer`), and :ref:`functions ` created by ``CREATE FUNCTION`` statements:: cr> select routine_name, routine_type ... from information_schema.routines ... group by routine_name, routine_type ... order by routine_name asc limit 5; +----------------------+--------------+ | routine_name | routine_type | +----------------------+--------------+ | PathHierarchy | TOKENIZER | | apostrophe | TOKEN_FILTER | | arabic | ANALYZER | | arabic_normalization | TOKEN_FILTER | | arabic_stem | TOKEN_FILTER | +----------------------+--------------+ SELECT 5 rows in set (... sec) For example you can use this table to list existing tokenizers like this:: cr> select routine_name ... from information_schema.routines ... where routine_type='TOKENIZER' ... order by routine_name asc limit 10; +----------------+ | routine_name | +----------------+ | PathHierarchy | | char_group | | classic | | edge_ngram | | keyword | | letter | | lowercase | | ngram | | path_hierarchy | | pattern | +----------------+ SELECT 10 rows in set (... sec) Or get an overview of how many routines and routine types are available:: cr> select count(*), routine_type ... from information_schema.routines ... group by routine_type ... order by routine_type; +----------+--------------+ | count(*) | routine_type | +----------+--------------+ | 45 | ANALYZER | | 3 | CHAR_FILTER | | 16 | TOKENIZER | | 61 | TOKEN_FILTER | +----------+--------------+ SELECT 4 rows in set (... sec) .. 
rubric:: Schema +--------------------+-------------+ | Name | Data Type | +====================+=============+ | routine_name | ``TEXT`` | +--------------------+-------------+ | routine_type | ``TEXT`` | +--------------------+-------------+ | routine_body | ``TEXT`` | +--------------------+-------------+ | routine_schema | ``TEXT`` | +--------------------+-------------+ | data_type | ``TEXT`` | +--------------------+-------------+ | is_deterministic | ``BOOLEAN`` | +--------------------+-------------+ | routine_definition | ``TEXT`` | +--------------------+-------------+ | specific_name | ``TEXT`` | +--------------------+-------------+ :routine_name: Name of the routine (might be duplicated in case of overloading) :routine_type: Type of the routine. Can be ``FUNCTION``, ``ANALYZER``, ``CHAR_FILTER``, ``TOKEN_FILTER`` or ``TOKEN_FILTER``. :routine_schema: The schema where the routine was defined. If it doesn't apply, then ``NULL``. :routine_body: The language used for the routine implementation. If it doesn't apply, then ``NULL``. :data_type: The return type of the function. If it doesn't apply, then ``NULL``. :is_deterministic: If the routine is deterministic then ``True``, else ``False`` (``NULL`` if it doesn't apply). :routine_definition: The function definition (``NULL`` if it doesn't apply). :specific_name: Used to uniquely identify the function in a schema, even if the function is overloaded. Currently the specific name contains the types of the function arguments. As the format might change in the future, it should be only used to compare it to other instances of ``specific_name``. ``schemata`` ------------ The schemata table lists all existing schemas. The ``blob``, ``information_schema``, and ``sys`` schemas are always available. The ``doc`` schema is available after the first user table is created. :: cr> select schema_name from information_schema.schemata order by schema_name; +--------------------+ | schema_name | +--------------------+ | blob | | doc | | information_schema | | pg_catalog | | sys | +--------------------+ SELECT 5 rows in set (... sec) .. _sql_features: ``sql_features`` ---------------- The ``sql_features`` table outlines supported and unsupported SQL features of CrateDB based to the current SQL standard (see :ref:`sql_supported_features`):: cr> select feature_name, is_supported, sub_feature_id, sub_feature_name ... from information_schema.sql_features ... where feature_id='F501'; +--------------------------------+--------------+----------------+--------------------+ | feature_name | is_supported | sub_feature_id | sub_feature_name | +--------------------------------+--------------+----------------+--------------------+ | Features and conformance views | FALSE | | | | Features and conformance views | TRUE | 1 | SQL_FEATURES view | | Features and conformance views | FALSE | 2 | SQL_SIZING view | | Features and conformance views | FALSE | 3 | SQL_LANGUAGES view | +--------------------------------+--------------+----------------+--------------------+ SELECT 4 rows in set (... 
sec) +------------------+-----------+----------+ | Name | Data Type | Nullable | +==================+===========+==========+ | feature_id | ``TEXT`` | NO | +------------------+-----------+----------+ | feature_name | ``TEXT`` | NO | +------------------+-----------+----------+ | sub_feature_id | ``TEXT`` | NO | +------------------+-----------+----------+ | sub_feature_name | ``TEXT`` | NO | +------------------+-----------+----------+ | is_supported | ``TEXT`` | NO | +------------------+-----------+----------+ | is_verified_by | ``TEXT`` | YES | +------------------+-----------+----------+ | comments | ``TEXT`` | YES | +------------------+-----------+----------+ :feature_id: Identifier of the feature :feature_name: Descriptive name of the feature by the Standard :sub_feature_id: Identifier of the sub feature; If it has zero-length, this is a feature :sub_feature_name: Descriptive name of the sub feature by the Standard; If it has zero-length, this is a feature :is_supported: ``YES`` if the feature is fully supported by the current version of CrateDB, ``NO`` if not :is_verified_by: Identifies the conformance test used to verify the claim; Always ``NULL`` since the CrateDB development group does not perform formal testing of feature conformance :comments: Either ``NULL`` or shows a comment about the supported status of the feature .. _character_sets: ``character_sets`` ------------------ The ``character_sets`` table identifies the character sets available in the current database. In CrateDB there is always a single entry listing `UTF8`:: cr> SELECT character_set_name, character_repertoire FROM information_schema.character_sets; +--------------------+----------------------+ | character_set_name | character_repertoire | +--------------------+----------------------+ | UTF8 | UCS | +--------------------+----------------------+ SELECT 1 row in set (... sec) .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``character_set_catalog`` - ``TEXT`` - Not implemented, this column is always null. * - ``character_set_schema`` - ``TEXT`` - Not implemented, this column is always null. * - ``character_set_name`` - ``TEXT`` - Name of the character set. * - ``character_repertoire`` - ``TEXT`` - Character repertoire. * - ``form_of_use`` - ``TEXT`` - Character encoding form, same as ``character_set_name``. * - ``default_collate_catalog`` - ``TEXT`` - Name of the database containing the default collation (Always ``crate``). * - ``default_collate_schema`` - ``TEXT`` - Name of the schema containing the default collation (Always ``NULL``). * - ``default_collate_name`` - ``TEXT`` - Name of the default collation (Always ``NULL``). .. _foreign_servers: ``foreign_servers`` ------------------- Lists foreign servers created using :ref:`ref-create-server`. See :ref:`administration-fdw`. .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``foreign_server_catalog`` - ``TEXT`` - Name of the database of the foreign server. Always ``crate``. * - ``foreign_server_name`` - ``TEXT`` - Name of the foreign server. * - ``foreign_data_wrapper_catalog`` - ``TEXT`` - Name of the database that contains the foreign-data wrapper. Always ``crate``. * - ``foreign_data_wrapper_name`` - ``TEXT`` - Name of the foreign-data wrapper used by the foreign server. * - ``foreign_server_type`` - ``TEXT`` - Foreign server type information. Always ``null``. * - ``foreign_server_version`` - ``TEXT`` - Foreign server version information. Always ``null``. 
* - ``authorization_identifier`` - ``TEXT`` - Name of the user who created the server. .. _foreign_server_options: ``foreign_server_options`` -------------------------- Lists options of foreign servers created using :ref:`ref-create-server`. See :ref:`administration-fdw`. .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``foreign_server_catalog`` - ``TEXT`` - Name of the database that the foreign server is defined in. Always ``crate``. * - ``foreign_server_name`` - ``TEXT`` - Name of the foreign server. * - ``option_name`` - ``TEXT`` - Name of an option. * - ``option_value`` - ``TEXT`` - Value of the option cast to string. .. _foreign_tables: ``foreign_tables`` ------------------ Lists foreign tables created using :ref:`ref-create-foreign-table`. See :ref:`administration-fdw`. .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``foreign_table_catalog`` - ``TEXT`` - Name of the database where the foreign table is defined in. Always ``crate``. * - ``foreign_table_schema`` - ``TEXT`` - Name of the schema that contains the foreign table. * - ``foreign_table_name`` - ``TEXT`` - Name of the foreign table. * - ``foreign_server_catalog`` - ``TEXT`` - Name of the database where the foreign server is defined in. Always ``crate``. * - ``foreign_server_name`` - ``TEXT`` - Name of the foreign server. .. _foreign_table_options: ``foreign_table_options`` ------------------------- Lists options for foreign tables created using :ref:`ref-create-foreign-table`. See :ref:`administration-fdw`. .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``foreign_table_catalog`` - ``TEXT`` - Name of the database that contains the foreign table. Always ``crate``. * - ``foreign_table_schema`` - ``TEXT`` - Name of the schema that contains the foreign table. * - ``foreign_table_name`` - ``TEXT`` - Name of the foreign table. * - ``option_name`` - ``TEXT`` - Name of an option. * - ``option_value`` - ``TEXT`` - Value of the option cast to string. .. _user_mappings: ``user_mappings`` ----------------- Lists user mappings created for foreign servers. See :ref:`administration-fdw`. .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``authorization_identifier`` - ``TEXT`` - Name of the user being mapped. * - ``foreign_server_catalog`` - ``TEXT`` - Name of the database of the foreign server. Always ``crate``. * - ``foreign_server_name`` - ``TEXT`` - Name of the foreign server for this user mapping. .. _user_mapping_options: ``user_mapping_options`` ------------------------ Lists the options for user mappings created for foreign servers. See :ref:`administration-fdw`. .. list-table:: :header-rows: 1 * - Column Name - Return Type - Description * - ``authorization_identifier`` - ``TEXT`` - Name of the user being mapped. * - ``foreign_server_catalog`` - ``TEXT`` - Name of the database of the foreign server. Always ``crate``. * - ``foreign_server_name`` - ``TEXT`` - Name of the foreign server for this user mapping. * - ``option_name`` - ``TEXT`` - Name of an option. * - ``option_value`` - ``TEXT`` - Value of the option. The value is visible only to the user being mapped and to superusers otherwise it will show as a ``NULL``... 
_administration_user_management: ========================== Users and roles management ========================== User and role account information is stored in the cluster metadata of CrateDB. The following statements are supported to create, alter and drop users and roles: * `CREATE USER`_ * `CREATE ROLE`_ * `ALTER USER`_ or `ALTER ROLE`_ * `DROP USER`_ or `DROP ROLE`_ These statements are database management statements that can be invoked by superusers that already exist in the CrateDB cluster. The `CREATE USER`_, `CREATE ROLE`_, `DROP USER`_ and `DROP ROLE`_ statements can also be invoked by users with the ``AL`` privilege. `ALTER USER`_ or `ALTER ROLE`_ can be invoked by users to change their own password, without requiring any privilege. When CrateDB is started, the cluster contains one predefined superuser. This user is called ``crate``. It is not possible to create any other superusers. The definition of all users and roles, including hashes of their passwords, together with their :ref:`privileges ` is backed up together with the cluster's metadata when a snapshot is created, and it is restored when using the ``ALL``, ``METADATA``, or ``USERMANAGEMENT`` keywords with the :ref:`sql-restore-snapshot` command. .. rubric:: Table of contents .. contents:: :local: ``ROLES`` --------- Roles are entities that are **not** allowed to log in, but they can be assigned privileges and they can be granted to other roles, thus creating a role hierarchy, or directly to users. For example, a role ``myschema_dql_role`` can be granted ``DQL`` privileges on schema ``myschema`` and afterwards the role can be :ref:`granted ` to a user, which will automatically :ref:`inherit ` those privileges from ``myschema_dql_role``. A role ``myschema_dml_role`` can be granted ``DML`` privileges on schema ``myschema`` and can also be granted the role ``myschema_dql_role``, thus also gaining ``DQL`` privileges. When ``myschema_dml_role`` is granted to a user, this user will automatically have both ``DQL`` and ``DML`` privileges on ``myschema``. ``CREATE ROLE`` =============== To create a new role for the CrateDB database cluster use the :ref:`ref-create-role` SQL statement:: cr> CREATE ROLE role_a; CREATE OK, 1 row affected (... sec) .. TIP:: Newly created roles do not have any privileges. After creating a role, you should :ref:`configure user privileges `. For example, to grant all privileges to the ``role_a`` role, run:: cr> GRANT ALL PRIVILEGES TO role_a; GRANT OK, 4 rows affected (... sec) .. hide: cr> REVOKE ALL PRIVILEGES FROM role_a; REVOKE OK, 4 rows affected (... sec) The name parameter of the statement follows the principles of an identifier, which means that it must be double-quoted if it contains special characters (e.g. whitespace) or if the case needs to be maintained:: cr> CREATE ROLE "Custom Role"; CREATE OK, 1 row affected (... sec) If a role or user with the name specified in the SQL statement already exists, the statement returns an error:: cr> CREATE ROLE "Custom Role"; RoleAlreadyExistsException[Role 'Custom Role' already exists] .. hide: cr> DROP ROLE "Custom Role"; DROP OK, 1 row affected (... sec) ``ALTER ROLE`` ============== The :ref:`ref-alter-role` and :ref:`ref-alter-user` SQL statements are not supported for roles, only for users. ``DROP ROLE`` ============= .. hide: cr> CREATE ROLE role_c; CREATE OK, 1 row affected (... sec) .. hide: cr> CREATE ROLE role_d; CREATE OK, 1 row affected (... 
sec) To remove an existing role from the CrateDB database cluster use the :ref:`ref-drop-role` or :ref:`ref-drop-user` SQL statement:: cr> DROP ROLE role_c; DROP OK, 1 row affected (... sec) :: cr> DROP USER role_d; DROP OK, 1 row affected (... sec) If a role with the name specified in the SQL statement does not exist, the statement returns an error:: cr> DROP ROLE role_d; RoleUnknownException[Role 'role_d' does not exist] List roles ========== .. hide: cr> CREATE ROLE role_b; CREATE OK, 1 row affected (... sec) cr> CREATE ROLE role_c; CREATE OK, 1 row affected (... sec) cr> GRANT role_c TO role_b; GRANT OK, 1 row affected (... sec) CrateDB exposes database roles via the read-only :ref:`sys-roles` system table. The ``sys.roles`` table shows all roles in the cluster which can be used to group privileges. To list all existing roles query the table:: cr> SELECT name, granted_roles FROM sys.roles order by name; +--------+------------------------------------------+ | name | granted_roles | +--------+------------------------------------------+ | role_a | [] | | role_b | [{"grantor": "crate", "role": "role_c"}] | | role_c | [] | +--------+------------------------------------------+ SELECT 3 rows in set (... sec) ``USERS`` --------- ``CREATE USER`` =============== To create a new user for the CrateDB database cluster use the :ref:`ref-create-user` SQL statement:: cr> CREATE USER user_a; CREATE OK, 1 row affected (... sec) .. TIP:: Newly created users do not have any privileges. After creating a user, you should :ref:`configure user privileges `. For example, to grant all privileges to the ``user_a`` user, run:: cr> GRANT ALL PRIVILEGES TO user_a; GRANT OK, 4 rows affected (... sec) .. hide: cr> REVOKE ALL PRIVILEGES FROM user_a; REVOKE OK, 4 rows affected (... sec) It can be used to connect to the database cluster using available authentication methods. You can specify the user's password in the ``WITH`` clause of the ``CREATE`` statement. This is required if you want to use the :ref:`auth_password`:: cr> CREATE USER user_b WITH (password = 'a_secret_password'); CREATE OK, 1 row affected (... sec) The username parameter of the statement follows the principles of an identifier which means that it must be double-quoted if it contains special characters (e.g. whitespace) or if the case needs to be maintained:: cr> CREATE USER "Custom User"; CREATE OK, 1 row affected (... sec) If a user with the username specified in the SQL statement already exists the statement returns an error:: cr> CREATE USER "Custom User"; RoleAlreadyExistsException[Role 'Custom User' already exists] .. hide: cr> DROP USER "Custom User"; DROP OK, 1 row affected (... sec) .. _administration_user_management_alter_user: ``ALTER USER`` ============== To alter the password for an existing user from the CrateDB database cluster use the :ref:`ref-alter-role` or :ref:`ref-alter-user` SQL statements:: cr> ALTER USER user_a SET (password = 'pass'); ALTER OK, 1 row affected (... sec) The password can be reset (cleared) if specified as ``NULL``:: cr> ALTER USER user_a SET (password = NULL); ALTER OK, 1 row affected (... sec) .. NOTE:: The built-in superuser ``crate`` has no password and it is not possible to set a new password for this user. To add or alter :ref:`session settings ` use the following SQL statement:: cr> ALTER USER user_b SET (search_path = 'myschema', statement_timeout = '10m'); ALTER OK, 1 row affected (... 
sec) To reset a :ref:`session setting ` to its default value use the following SQL statement:: cr> ALTER USER user_b RESET statement_timeout; ALTER OK, 1 row affected (... sec) .. hide: cr> ALTER USER user_a SET (search_path = 'new_schema', statement_timeout = '1h'); ALTER OK, 1 row affected (... sec) To reset all modified :ref:`session setting ` for a user to their default values, use the following SQL statement:: cr> ALTER USER user_a RESET ALL; ALTER OK, 1 row affected (... sec) ``DROP USER`` ============= .. hide: cr> CREATE USER user_c; CREATE OK, 1 row affected (... sec) cr> CREATE USER user_d; CREATE OK, 1 row affected (... sec) To remove an existing user from the CrateDB database cluster use the :ref:`ref-drop-role` or :ref:`ref-drop-user` SQL statements:: cr> DROP USER user_c; DROP OK, 1 row affected (... sec) :: cr> DROP ROLE user_d; DROP OK, 1 row affected (... sec) If a user with the username specified in the SQL statement does not exist the statement returns an error:: cr> DROP USER user_d; RoleUnknownException[Role 'user_d' does not exist] .. NOTE:: It is not possible to drop the built-in superuser ``crate``. List users ========== .. hide: cr> GRANT role_a, role_b TO user_a; GRANT OK, 2 rows affected (... sec) CrateDB exposes database users via the read-only :ref:`sys-users` system table. The ``sys.users`` table shows all users in the cluster which can be used for authentication. The initial superuser ``crate`` which is available for all CrateDB clusters is also part of that list. To list all existing users query the table:: cr> SELECT name, granted_roles, password, session_settings, superuser FROM sys.users order by name; +--------+----------------------------------------------------------------------------------+----------+-----------------------------+-----------+ | name | granted_roles | password | session_settings | superuser | +--------+----------------------------------------------------------------------------------+----------+-----------------------------+-----------+ | crate | [] | NULL | {} | TRUE | | user_a | [{"grantor": "crate", "role": "role_a"}, {"grantor": "crate", "role": "role_b"}] | NULL | {} | FALSE | | user_b | [] | ******** | {"search_path": "myschema"} | FALSE | +--------+----------------------------------------------------------------------------------+----------+-----------------------------+-----------+ SELECT 3 rows in set (... sec) .. NOTE:: CrateDB also supports retrieving the current connected user using the :ref:`system information functions `: :ref:`CURRENT_USER `, :ref:`USER ` and :ref:`SESSION_USER `. .. vale off .. Drop Users & Roles .. hide: cr> DROP USER user_a; DROP OK, 1 row affected (... sec) cr> DROP USER user_b; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_a; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_b; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_c; DROP OK, 1 row affected (... sec).. highlight:: psql .. _administration-privileges: ========== Privileges ========== To execute statements, a user needs to have the required privileges. .. rubric:: Table of contents .. contents:: :local: .. _privileges-intro: Introduction ============ CrateDB has a superuser (``crate``) which has the privilege to do anything. The privileges of other users and roles have to be managed using the ``GRANT``, ``DENY`` or ``REVOKE`` statements. The privileges that can be granted, denied or revoked are: - ``DQL`` - ``DML`` - ``DDL`` - ``AL`` Skip to :ref:`privilege_types` for details. .. 
_privileges-classes: Privilege Classes ================= The privileges can be granted on different classes: - ``CLUSTER`` - ``SCHEMA`` - ``TABLE`` and ``VIEW`` Skip to :ref:`hierarchical_privileges_inheritance` for details. A user with ``AL`` on level ``CLUSTER`` can grant privileges they have themselves to other users or roles as well. .. _privilege_types: Privilege types =============== ``DQL`` ....... Granting ``Data Query Language (DQL)`` privilege to a user or role, indicates that this user/role is allowed to execute ``SELECT``, ``SHOW``, ``REFRESH`` and ``COPY TO`` statements, as well as using the available :ref:`user-defined functions `, on the object for which the privilege applies. ``DML`` ....... Granting ``Data Manipulation Language (DML)`` privilege to a user or role, indicates that this user/role is allowed to execute ``INSERT``, ``COPY FROM``, ``UPDATE`` and ``DELETE`` statements, on the object for which the privilege applies. ``DDL`` ....... Granting ``Data Definition Language (DDL)`` privilege to a user or role, indicates that this user/role is allowed to execute the following statements on objects for which the privilege applies: - ``CREATE TABLE`` - ``DROP TABLE`` - ``CREATE VIEW`` - ``DROP VIEW`` - ``CREATE FUNCTION`` - ``DROP FUNCTION`` - ``CREATE REPOSITORY`` - ``DROP REPOSITORY`` - ``CREATE SNAPSHOT`` - ``DROP SNAPSHOT`` - ``RESTORE SNAPSHOT`` - ``ALTER TABLE`` ``AL`` ...... Granting ``Administration Language (AL)`` privilege to a user or role, enables the user/role to execute the following statements: - ``CREATE USER/ROLE`` - ``DROP USER/ROLE`` - ``SET GLOBAL`` All statements enabled via the ``AL`` privilege operate on a cluster level. So granting this on a schema or table level will have no effect. .. _hierarchical_privileges_inheritance: Hierarchical inheritance of privileges ====================================== .. vale off .. hide: cr> CREATE USER riley; CREATE OK, 1 row affected (... sec) cr> CREATE USER kala; CREATE OK, 1 row affected (... sec) cr> CREATE TABLE IF NOT EXISTS doc.accounting ( ... id integer primary key, ... name text, ... joined timestamp with time zone ... ) clustered by (id); CREATE OK, 1 row affected (... sec) cr> INSERT INTO doc.accounting ... (id, name, joined) ... VALUES (1, 'Jon', 0); INSERT OK, 1 row affected (... sec) cr> REFRESH TABLE doc.accounting REFRESH OK, 1 row affected (... sec) .. vale on Privileges can be managed on three different levels, namely: ``CLUSTER``, ``SCHEMA``, and ``TABLE``/``VIEW``. When a privilege is assigned on a certain level, the privilege will propagate down the hierarchy. Privileges defined on a lower level will always override those from a higher level: .. code-block:: none cluster || schema / \ table view This statement will grant ``DQL`` privilege to user ``riley`` on all the tables and :ref:`functions ` of the ``doc`` schema:: cr> GRANT DQL ON SCHEMA doc TO riley; GRANT OK, 1 row affected (... sec) This statement will deny ``DQL`` privilege to user ``riley`` on the ``doc`` schema table ``doc.accounting``. However, ``riley`` will still have ``DQL`` privilege on all the other tables of the ``doc`` schema:: cr> DENY DQL ON TABLE doc.accounting TO riley; DENY OK, 1 row affected (... sec) .. NOTE:: In CrateDB, schemas are just namespaces that are created and dropped implicitly. Therefore, when ``GRANT``, ``DENY`` or ``REVOKE`` are invoked on a schema level, CrateDB takes the schema name provided without further validation. 
Privileges can be managed on all schemas and tables of the cluster, except the ``information_schema``. Views are on the same hierarchy with tables, i.e. a privilege on a view is gained through a ``GRANT`` on either the view itself, the schema the view belongs to, or a cluster-wide privilege. Privileges on relations which are referenced in the view do not grant any privileges on the view itself. On the contrary, even if the user/role does not have any privileges on a view's referenced relations but on the view itself, the user/role can still access the relations through the view. For example:: cr> CREATE VIEW first_customer as SELECT * from doc.accounting ORDER BY id LIMIT 1 CREATE OK, 1 row affected (... sec) Previously we had issued a ``DENY`` for user ``riley`` on ``doc.accounting`` but we can still access it through the view because we have access to it through the ``doc`` schema:: cr> SELECT id from first_customer; +----+ | id | +----+ | 1 | +----+ SELECT 1 row in set (... sec) .. SEEALSO:: :ref:`Views: Privileges ` Behavior of ``GRANT``, ``DENY`` and ``REVOKE`` ============================================== .. NOTE:: You can only grant, deny, or revoke privileges for an existing user or role. You must first :ref:`create a user/role ` and then configure privileges. ``GRANT`` ......... .. hide: cr> CREATE USER wolfgang; CREATE OK, 1 row affected (... sec) cr> CREATE USER will; CREATE OK, 1 row affected (... sec) cr> CREATE TABLE IF NOT EXISTS doc.books ( ... first_column integer primary key, ... second_column text); CREATE OK, 1 row affected (... sec) To grant a privilege to an existing user or role on the whole cluster, we use the :ref:`ref-grant` SQL statement, for example:: cr> GRANT DML TO wolfgang; GRANT OK, 1 row affected (... sec) ``DQL`` privilege can be granted on the ``sys`` schema to user ``wolfgang``, like this:: cr> GRANT DQL ON SCHEMA sys TO wolfgang; GRANT OK, 1 row affected (... sec) The following statement will grant all privileges on table doc.books to user ``wolfgang``:: cr> GRANT ALL PRIVILEGES ON TABLE doc.books TO wolfgang; GRANT OK, 4 rows affected (... sec) Using "ALL PRIVILEGES" is a shortcut to grant all the :ref:`currently grantable privileges ` to a user or role. .. NOTE:: If no schema is specified in the table ``ident``, the table will be looked up in the current schema. If a user/role with the name specified in the SQL statement does not exist the statement returns an error:: cr> GRANT DQL TO layla; RoleUnknownException[Role 'layla' does not exist] To grant ``ALL PRIVILEGES`` to user will on the cluster, we can use the following syntax:: cr> GRANT ALL PRIVILEGES TO will; GRANT OK, 4 rows affected (... sec) Using ``ALL PRIVILEGES`` is a shortcut to grant all the currently grantable privileges to a user or role, namely ``DQL``, ``DML`` and ``DDL``. Privileges can be granted to multiple users/roles in the same statement, like so:: cr> GRANT DDL ON TABLE doc.books TO wolfgang, will; GRANT OK, 1 row affected (... sec) ``DENY`` ........ To deny a privilege to an existing user or role on the whole cluster, use the :ref:`ref-deny` SQL statement, for example:: cr> DENY DDL TO will; DENY OK, 1 row affected (... sec) ``DQL`` privilege can be denied on the ``sys`` schema to user ``wolfgang`` like this:: cr> DENY DQL ON SCHEMA sys TO wolfgang; DENY OK, 1 row affected (... sec) The following statement will deny ``DQL`` privilege on table doc.books to user ``wolfgang``:: cr> DENY DQL ON TABLE doc.books TO wolfgang; DENY OK, 1 row affected (... 
sec) ``DENY ALL`` or ``DENY ALL PRIVILEGES`` will deny all privileges to a user or role, on the cluster it can be used like this:: cr> DENY ALL TO will; DENY OK, 3 rows affected (... sec) ``REVOKE`` .......... To revoke a privilege that was previously granted or denied to a user or role use the :ref:`ref-revoke` SQL statement, for example the ``DQL`` privilege that was previously denied to user ``wolfgang`` on the ``sys`` schema, can be revoked like this:: cr> REVOKE DQL ON SCHEMA sys FROM wolfgang; REVOKE OK, 1 row affected (... sec) The privileges that were granted and denied to user ``wolfgang`` on doc.books can be revoked like this:: cr> REVOKE ALL ON TABLE doc.books FROM wolfgang; REVOKE OK, 4 rows affected (... sec) The privileges that were granted to user ``will`` on the cluster can be revoked like this:: cr> REVOKE ALL FROM will; REVOKE OK, 4 rows affected (... sec) .. NOTE:: The ``REVOKE`` statement can remove only privileges that have been granted or denied through the ``GRANT`` or ``DENY`` statements. If the privilege on a specific object was not explicitly granted, the ``REVOKE`` statement has no effect. The effect of the ``REVOKE`` statement will be reflected in the row count. .. NOTE:: When a privilege is revoked from a user or role, it can still be active for that user/role, if the user/role :ref:`inherits ` it, from another role. List privileges =============== CrateDB exposes the privileges of users and roles of the database through the :ref:`sys.privileges ` system table. By querying the ``sys.privileges`` table you can get all information regarding the existing privileges. E.g.:: cr> SELECT * FROM sys.privileges order by grantee, class, ident; +---------+----------+---------+----------------+-------+------+ | class | grantee | grantor | ident | state | type | +---------+----------+---------+----------------+-------+------+ | SCHEMA | riley | crate | doc | GRANT | DQL | | TABLE | riley | crate | doc.accounting | DENY | DQL | | TABLE | will | crate | doc.books | GRANT | DDL | | CLUSTER | wolfgang | crate | NULL | GRANT | DML | +---------+----------+---------+----------------+-------+------+ SELECT 4 rows in set (... sec) .. hide: cr> DROP user riley; DROP OK, 1 row affected (... sec) cr> DROP user kala; DROP OK, 1 row affected (... sec) cr> DROP TABLE IF EXISTS doc.accounting; DROP OK, 1 row affected (... sec) cr> DROP user wolfgang; DROP OK, 1 row affected (... sec) cr> DROP user will; DROP OK, 1 row affected (... sec) cr> DROP TABLE IF EXISTS doc.books; DROP OK, 1 row affected (... sec) cr> DROP VIEW first_customer; DROP OK, 1 row affected (... sec) .. _roles_inheritance: Roles inheritance ================= .. hide: cr> CREATE USER john; CREATE OK, 1 row affected (... sec) cr> CREATE ROLE role_a; CREATE OK, 1 row affected (... sec) cr> CREATE ROLE role_b; CREATE OK, 1 row affected (... sec) cr> CREATE ROLE role_c; CREATE OK, 1 row affected (... sec) Introduction ............ You can grant, or revoke roles for an existing user or role. This allows to group granted or denied privileges and inherit them to other users or roles. You must first :ref:`create usesr and roles ` and then grant roles to other roles or users. You can configure the privileges of each role before or after granting roles to other roles or users. .. NOTE:: Roles can be granted to other roles or users, but users (roles which can also login to the database) cannot be granted to other roles or users. .. 
NOTE:: Superuser ``crate`` cannot be granted to other users or roles, and roles cannot be granted to it. Inheritance ........... Inheritance can span multiple levels, so you can have ``role_a`` which is granted to ``role_b``, which in turn is granted to ``role_c``, and so on. Each role can be granted to multiple other roles, and each role or user can be granted multiple other roles. Cycles cannot be created, for example:: cr> GRANT role_a TO role_b; GRANT OK, 1 row affected (... sec) :: cr> GRANT role_b TO role_c; GRANT OK, 1 row affected (... sec) :: cr> GRANT role_c TO role_a; SQLParseException[Cannot grant role role_c to role_a, role_a is a parent role of role_c and a cycle will be created] .. hide: cr> REVOKE role_b FROM role_c; REVOKE OK, 1 row affected (... sec) cr> REVOKE role_a FROM role_b; REVOKE OK, 1 row affected (... sec) Privileges resolution ..................... When a user executes a statement, the privileges mechanism first checks whether the user has been granted the required privileges. If not, it checks whether the roles granted to this user have those privileges, and if not, it continues checking the roles granted to those parent roles, and so on. For example:: cr> GRANT role_a TO role_b; GRANT OK, 1 row affected (... sec) :: cr> GRANT role_b TO role_c; GRANT OK, 1 row affected (... sec) :: cr> GRANT DQL ON TABLE sys.users TO role_a; GRANT OK, 1 row affected (... sec) :: cr> GRANT role_c TO john; GRANT OK, 1 row affected (... sec) User ``john`` is able to query ``sys.users`` because, even though he lacks the ``DQL`` privilege on the table, he is granted ``role_c``, which in turn is granted ``role_b``, which is granted ``role_a``, and ``role_a`` has the ``DQL`` privilege on ``sys.users``. .. hide: cr> REVOKE role_c FROM john; REVOKE OK, 1 row affected (... sec) cr> REVOKE role_b FROM role_c; REVOKE OK, 1 row affected (... sec) cr> REVOKE role_a FROM role_b; REVOKE OK, 1 row affected (... sec) cr> REVOKE DQL ON TABLE sys.users FROM role_a; REVOKE OK, 1 row affected (... sec) Keep in mind that ``DENY`` has precedence over ``GRANT``. If a role has been both granted and denied a privilege (directly or through role inheritance), then ``DENY`` will take effect. For example, a ``GRANT`` is inherited from a role while a ``DENY`` is set directly on the user:: cr> GRANT DQL ON TABLE sys.users TO role_a; GRANT OK, 1 row affected (... sec) :: cr> GRANT role_a TO john; GRANT OK, 1 row affected (... sec) :: cr> DENY DQL ON TABLE sys.users TO john; DENY OK, 1 row affected (... sec) User ``john`` cannot query ``sys.users``. .. hide: cr> REVOKE role_a FROM john; REVOKE OK, 1 row affected (... sec) cr> REVOKE DQL ON TABLE sys.users FROM role_a; REVOKE OK, 1 row affected (... sec) Another example with ``DENY`` in effect, inherited from a role:: cr> GRANT DQL ON TABLE sys.users TO role_a; GRANT OK, 1 row affected (... sec) :: cr> DENY DQL ON TABLE sys.users TO role_b; DENY OK, 1 row affected (... sec) :: cr> GRANT role_a, role_b TO john; GRANT OK, 2 rows affected (... sec) User ``john`` cannot query ``sys.users``. .. hide: cr> DROP USER john; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_c; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_b; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_a; DROP OK, 1 row affected (... sec) .. _granting_roles: ``GRANT`` ......... .. hide: cr> CREATE ROLE role_dql; CREATE OK, 1 row affected (... sec) cr> CREATE ROLE role_all_on_books; CREATE OK, 1 row affected (... sec) cr> CREATE USER wolfgang; CREATE OK, 1 row affected (... 
sec) cr> CREATE USER will; CREATE OK, 1 row affected (... sec) cr> CREATE USER layla; CREATE OK, 1 row affected (... sec) cr> CREATE TABLE IF NOT EXISTS doc.books ( ... first_column integer primary key, ... second_column text); CREATE OK, 1 row affected (... sec) To grant an existing role to an existing user or role on the whole cluster, we use the :ref:`ref-grant` SQL statement, for example:: cr> GRANT role_dql TO wolfgang; GRANT OK, 1 row affected (... sec) ``DQL`` privilege can be granted on the ``sys`` schema to role ``role_dql`` and, by inheritance, to user ``wolfgang`` as well, like this:: cr> GRANT DQL ON SCHEMA sys TO role_dql; GRANT OK, 1 row affected (... sec) The following statements will grant all privileges on table doc.books to role ``role_all_on_books``, and by inheritance to user ``wolfgang`` as well:: cr> GRANT role_all_on_books TO wolfgang; GRANT OK, 1 row affected (... sec) :: cr> GRANT ALL PRIVILEGES ON TABLE doc.books TO role_all_on_books; GRANT OK, 4 rows affected (... sec) If a role with the name specified in the SQL statement does not exist, the statement returns an error:: cr> GRANT DDL TO role_ddl; RoleUnknownException[Role 'role_ddl' does not exist] Multiple roles can be granted to multiple users/roles in the same statement, like so:: cr> GRANT role_dql, role_all_on_books TO layla, will; GRANT OK, 4 rows affected (... sec) Notice that `4 rows` affected is returned because there are two users, ``will`` and ``layla``, and each of them is granted two roles: ``role_dql`` and ``role_all_on_books``. ``REVOKE`` .......... To revoke a role that was previously granted to a user or role, use the :ref:`ref-revoke` SQL statement. For example, role ``role_dql``, which was previously granted to users ``wolfgang``, ``layla`` and ``will``, can be revoked like this:: cr> REVOKE role_dql FROM wolfgang, layla, will; REVOKE OK, 3 rows affected (... sec) If a privilege is revoked from a role which is granted to other roles or users, the privilege is also automatically revoked for those roles and users. For example, if we revoke privileges on table ``doc.books`` from ``role_all_on_books``:: cr> REVOKE ALL PRIVILEGES ON TABLE doc.books FROM role_all_on_books; REVOKE OK, 4 rows affected (... sec) user ``wolfgang``, who is granted the role ``role_all_on_books``, also loses those privileges. .. hide: cr> CREATE ROLE role_dml; CREATE OK, 1 row affected (... sec) cr> CREATE ROLE john; CREATE OK, 1 row affected (... sec) If a user is granted the same privilege by inheriting two different roles, when revoking one of the roles, the user still keeps the privilege. For example, if user ``john`` gets granted ``role_dql`` and ``role_dml``:: cr> GRANT DQL TO role_dql; GRANT OK, 1 row affected (... sec) :: cr> GRANT DQL, DML TO role_dml; GRANT OK, 2 rows affected (... sec) :: cr> GRANT role_dql, role_dml TO john; GRANT OK, 2 rows affected (... sec) and then we revoke ``role_dql`` from ``john``:: cr> REVOKE role_dql FROM john; REVOKE OK, 1 row affected (... sec) ``john`` still has the ``DQL`` privilege since he inherits it from ``role_dml``, which is still granted to him. .. hide: cr> DROP USER wolfgang; DROP OK, 1 row affected (... sec) cr> DROP USER will; DROP OK, 1 row affected (... sec) cr> DROP USER layla; DROP OK, 1 row affected (... sec) cr> DROP USER john; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_dql; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_dml; DROP OK, 1 row affected (... sec) cr> DROP ROLE role_all_on_books; DROP OK, 1 row affected (... 
sec) cr> DROP TABLE doc.books; DROP OK, 1 row affected (... sec)
Multi-tenancy is an architecture in which different tenants share a single software instance. CrateDB does not support the creation of multiple databases and catalogs as some other solutions do (e.g., PostgreSQL). However, there are several ways to implement multi-tenancy in CrateDB, and, as is often the case, which one works best depends on a variety of requirements and trade-offs. In this article, we will illustrate two methods for sharing a single CrateDB instance between multiple tenants.

# Schema-based multi-tenancy

In schema-based multi-tenancy, every tenant has its own database schema. CrateDB supports the creation of tables in different schemas ([Schemas - CrateDB Reference](https://crate.io/docs/crate/reference/en/latest/general/ddl/create-table.html#schemas)). The following statements illustrate the creation of two tables in different schemas:

```sql
CREATE TABLE "tenantA"."table1" (
  id int,
  name text
);

CREATE TABLE "tenantB"."table2" (
  id int,
  address text
);
```

In this example, we created the first table inside schema ``tenantA`` and the second table inside schema ``tenantB``. Furthermore, access privileges can be administered on the `SCHEMA` level to restrict tenant users to their own schema.

Schema-based multi-tenancy has several benefits:

* Schema changes are independent of other tenants inside CrateDB
* Less risk of data leakage due to data isolation
* Application code does not have to be tenant-aware

However, there are some drawbacks:

* More complexity, as this approach requires the creation of different schemas for different tenants
* Performance considerations, such as sharding and partitioning, need to be addressed for every tenant individually (depending on the expected data volume)
* Higher risk of querying the wrong schema
* Risk of getting close to the maximum number of shards ([cluster.max_shards_per_node](https://crate.io/docs/crate/reference/en/5.3/config/cluster.html#shard-limits)) if there is a significant number of tenants

# Table-based multi-tenancy

In table-based multi-tenancy, all data resides in the same table, but it is separated by a discriminator column. In this case, each query needs a `WHERE` clause to select data based on the tenant context. The following example illustrates table creation with a separate `tenant` column:

```sql
CREATE TABLE "doc"."name" (
  id int,
  name text,
  price int,
  tenant text
);
```

Record-based access control is not possible in this scenario. However, you can create a `VIEW` that is restricted to a single tenant (see the sketch further below). Without views, data isolation must be guaranteed at the application level.

Table-based multi-tenancy has some benefits:

* The application doesn't need to worry about which schema it is connecting to
* There is only one schema to maintain
* Performance considerations are easier to make, as you don't need to differentiate between tenants with high and low data volume in your sharding and partitioning strategy
* Data is shared across all tenants

Drawbacks are:

* Application code needs to be tenant-aware
* Schema changes affect all tenants
* Possible data leaks, as record-based access control is not possible

Finally, if you need full data isolation between different tenants, then you must run a separate CrateDB cluster for each tenant.

# Configuring access privileges with CrateDB

The privileges of CrateDB users have to be managed using the `GRANT`, `DENY` or `REVOKE` statements ([Privileges](https://crate.io/docs/crate/reference/en/4.8/admin/privileges.html)).
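As a sketch of the view-based restriction mentioned above (the view name, the selected columns, and the tenant value are illustrative), a per-tenant view narrows the shared table down to a single tenant before any privileges are granted on it:

```sql
-- Expose only tenantA's rows of the shared table through a dedicated view
CREATE VIEW "doc"."tenantA_view" AS
  SELECT id, name, price
  FROM "doc"."name"
  WHERE tenant = 'tenantA';
```

Granting read access on such a view, instead of on the underlying table, keeps the rows of other tenants out of reach of that tenant's users.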
CrateDB supports four different privilege types: * Data Query Language (DQL) * Data Manipulation Language (DML) * Data Definition Language (DDL) * Administration Language (AL) These privileges can be granted on `CLUSTER`, `SCHEMA`, `TABLE`, and `VIEW` levels. In schema-based multi-tenancy, you can grant a user full privileges on schema `tenantA` using the following statement: ```sql GRANT ALL PRIVILEGES ON SCHEMA tenantA TO tenantA_user1; ``` Similarly, in table-based multi-tenancy you can grant `DQL` privilege for a specific tenant view: ```sql GRANT DQL ON VIEW tenantA_view TO tenantA_user1; ``` # Summary This short article covers the main approaches to multi-tenancy with CrateDB: schema-based and table-based multi-tenancy. We also outlined the benefits and drawbacks of each approach, and which one works best for you depends on your use case and goals. If you find this article interesting and want to learn more about CrateDB, visit our [official documentation](https://crate.io/docs/crate/reference/en/4.8/) and check our tutorials on [CrateDB Community](https://community.cratedb.com/).
.. _sql: ========== SQL syntax ========== You can use :ref:`Structured Query Language ` (SQL) to query your data. This section of the documentation provides a complete SQL syntax reference for CrateDB. .. NOTE:: For introductions to CrateDB functionality, we recommend you consult the appropriate top-level section of the documentation. The SQL syntax reference assumes a basic familiarity with the relevant parts of CrateDB. .. SEEALSO:: :ref:`General use: Data definition ` :ref:`General use: Data manipulation ` :ref:`General use: Querying ` :ref:`General use: Built-in functions and operators ` .. toctree:: :maxdepth: 2 general/index statements/index.. highlight:: psql .. _sql-create-table: ================ ``CREATE TABLE`` ================ Create a new table. .. rubric:: Table of contents .. contents:: :local: .. _sql-create-table-synopsis: Synopsis ======== :: CREATE TABLE [ IF NOT EXISTS ] table_ident ( [ { base_column_definition | generated_column_definition | table_constraint } [, ... ] ] ) [ PARTITIONED BY (column_name [, ...] ) ] [ CLUSTERED [ BY (routing_column) ] INTO num_shards SHARDS ] [ WITH ( table_parameter [= value] [, ... ] ) ] where ``base_column_definition``:: column_name data_type [ DEFAULT default_expr ] [ column_constraint [ ... ] ] [ storage_options ] where ``generated_column_definition`` is:: column_name [ data_type ] [ GENERATED ALWAYS ] AS [ ( ] generation_expression [ ) ] [ column_constraint [ ... ] ] where ``column_constraint`` is:: { [ CONSTRAINT constraint_name ] PRIMARY KEY | NULL | NOT NULL | INDEX { OFF | USING { PLAIN | FULLTEXT [ WITH ( analyzer = analyzer_name ) ] } [ CONSTRAINT constraint_name ] CHECK (boolean_expression) } where ``storage_options`` is:: STORAGE WITH ( option = value_expression [, ... ] ) and ``table_constraint`` is:: { [ CONSTRAINT constraint_name ] PRIMARY KEY ( column_name [, ... ] ) | INDEX index_name USING FULLTEXT ( column_name [, ... ] ) [ WITH ( analyzer = analyzer_name ) ] [ CONSTRAINT constraint_name ] CHECK (boolean_expression) } .. _sql-create-table-description: Description =========== ``CREATE TABLE`` will create a new, initially empty table. If the ``table_ident`` does not contain a schema, the table is created in the ``doc`` schema. Otherwise it is created in the given schema, which is implicitly created, if it didn't exist yet. A table consists of one or more *base columns* and any number of *generated columns* and/or *table constraints*. The optional constraint clauses specify constraints (tests) that new or updated rows must satisfy for an ``INSERT``, ``UPDATE`` or ``COPY FROM`` operation to succeed. A constraint is an SQL object that helps define the set of valid values in the table in various ways. There are two ways to define constraints: table constraints and column constraints. A column constraint is defined as part of a column definition. A table constraint definition is not tied to a particular column, and it can encompass more than one column. Every column constraint can also be written as a table constraint; a column constraint is only a notational convenience for use when the constraint only affects one column. .. SEEALSO:: :ref:`Data definition: Creating tables ` .. _sql-create-table-elements: Table elements -------------- .. _sql-create-table-base-columns: Base Columns ~~~~~~~~~~~~ A base column is a persistent column in the table metadata. In relational terms it is an attribute of the tuple of the table-relation. It has a name, a type, an optional default clause and optional constraints. 
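For illustration only (the table and column names are made up, not part of the reference), a base column definition can combine a type, a default clause, and column constraints::

    CREATE TABLE sensors (
        id INTEGER PRIMARY KEY,
        status TEXT DEFAULT 'active' NOT NULL
    );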
Base columns are readable and writable (if the table itself is writable). Values for base columns are given in DML statements explicitly or omitted, in which case their value is null. .. _sql-create-table-default-clause: Default clause ^^^^^^^^^^^^^^ The optional default clause defines the default value of the column. The value is inserted when the column is a target of an ``INSERT`` or ``COPY FROM`` statement that doesn't contain an explicit value for it. The default clause :ref:`expression ` is variable-free, it means that subqueries and cross-references to other columns are not allowed. .. NOTE:: Default values are not allowed for columns of type ``OBJECT``:: cr> CREATE TABLE tbl (obj OBJECT DEFAULT {key='foo'}) SQLParseException[Default values are not allowed for object columns: obj] They are allowed for sub columns of an object column. If an object column has at least one child with a default expression it will implicitly create the full object unless it's within an array. An example:: cr> CREATE TABLE object_defaults (id int, obj OBJECT AS (key TEXT DEFAULT '')) CREATE OK, 1 row affected (... sec) cr> INSERT INTO object_defaults (id) VALUES (1) INSERT OK, 1 row affected (... sec) cr> REFRESH TABLE object_defaults REFRESH OK, 1 row affected (... sec) cr> SELECT obj FROM object_defaults +-------------+ | obj | +-------------+ | {"key": ""} | +-------------+ SELECT 1 row in set (... sec) .. _sql-create-table-generated-columns: Generated columns ~~~~~~~~~~~~~~~~~ A generated column is a persistent column that is computed as needed from the ``generation_expression`` for every ``INSERT``, ``UPDATE`` and ``COPY FROM`` operation. The ``GENERATED ALWAYS`` part of the syntax is optional. .. NOTE:: A generated column is not a virtual column. The computed value is stored in the table like a base column is. The automatic computation of the value is what makes it different. .. SEEALSO:: :ref:`Data definition: Generated columns ` .. _sql-create-table-table-constraints: Table constraints ~~~~~~~~~~~~~~~~~ Table constraints are constraints that are applied to more than one column or to the table as a whole. .. SEEALSO:: - :ref:`General SQL: Table constraints ` - :ref:`CHECK constraint ` .. _sql-create-table-column-constraints: Column constraints ~~~~~~~~~~~~~~~~~~ Column constraints are constraints that are applied on each column of the table separately. .. SEEALSO:: - :ref:`General SQL: Column constraints ` - :ref:`CHECK constraint ` .. _sql-create-table-storage-options: Storage options ~~~~~~~~~~~~~~~ Storage options can be applied on each column of the table separately. .. SEEALSO:: :ref:`Data definition: Storage ` .. _sql-create-table-parameters: Parameters ========== :table_ident: The name (optionally schema-qualified) of the table to be created. :column_name: The name of a column to be created in the new table. :data_type: The :ref:`data type ` of the column. This can include array and object specifiers. :generation_expression: An :ref:`expression ` (usually a :ref:`function call `) that is applied in the context of the current row. As such, it can reference other base columns of the table. Referencing other generated columns (including itself) is not supported. The generation expression is :ref:`evaluated ` each time a row is inserted or the referenced base columns are updated. .. _sql-create-table-if-not-exists: ``IF NOT EXISTS`` ================= If the optional ``IF NOT EXISTS`` clause is used, this statement won't do anything if the table exists already, and ``0`` rows will be returned. .. 
_sql-create-table-clustered: ``CLUSTERED`` ============= The optional ``CLUSTERED`` clause specifies how a table should be distributed across a cluster. :: [ CLUSTERED [ BY (routing_column) ] INTO num_shards SHARDS ] :num_shards: Specifies the number of :ref:`shards ` a table is stored in. Must be greater than 0. If not provided, the number of shards is calculated based on the number of currently active data nodes with the following formula:: num_shards = max(4, num_data_nodes * 2) .. NOTE:: The minimum value of ``num_shards`` is set to ``4``. This means if the calculation of ``num_shards`` does not exceeds its minimum it applies the minimum value to each table or partition as default. :routing_column: Specify a :ref:`routing column ` that :ref:`determines ` how rows are sharded. All rows that have the same ``routing_column`` row value are stored in the same shard. If a :ref:`primary key ` has been defined, it will be used as the default routing column, otherwise the :ref:`internal document ID ` is used. .. SEEALSO:: :ref:`Data definition: Sharding ` .. _sql-create-table-partitioned-by: ``PARTITIONED BY`` ================== The ``PARTITIONED`` clause splits the created table into separate :ref:`partitions ` for every distinct combination of row values in the specified :ref:`partition columns `. :: [ PARTITIONED BY ( column_name [ , ... ] ) ] :column_name: The name of a column to be used for partitioning. Multiple columns names can be specified inside the parentheses and must be separated by commas. The following restrictions apply: - Partition columns may not be part of the :ref:`sql-create-table-clustered` clause - Partition columns must only contain :ref:`primitive types ` - Partition columns may not be inside an object array - Partition columns may not be indexed with a :ref:`fulltext index with analyzer ` - If the table has a :ref:`primary_key_constraint` constraint, all of the partition columns must be included in the primary key definition .. CAUTION:: Partition columns :ref:`cannot be altered ` by an ``UPDATE`` statement. .. _sql-create-table-with: ``WITH`` ======== The optional ``WITH`` clause can specify parameters for tables. :: [ WITH ( table_parameter [= value] [, ... ] ) ] :table_parameter: Specifies an optional parameter for the table. .. NOTE:: Some parameters are nested, and therefore need to be wrapped in double quotes in order to be set. For example:: WITH ("allocation.max_retries" = 5) Nested parameters are those that contain a ``.`` between parameter names (e.g. ``write.wait_for_active_shards``). Available parameters are: .. _sql-create-table-number-of-replicas: ``number_of_replicas`` ---------------------- Specifies the number or range of replicas each shard of a table should have for normal operation, the default is to have ``0-1`` replica. The number of replicas is defined like this:: min_replicas [ - [ max_replicas ] ] :min_replicas: The minimum number of replicas required. :max_replicas: The maximum number of replicas. The actual maximum number of replicas is max(num_replicas, N-1), where N is the number of data nodes in the cluster. If ``max_replicas`` is the string ``all`` then it will always be N-1. .. NOTE:: If the value is provided as a range or the default value ``0-1`` is used, :ref:`cluster.max_shards_per_node ` and :ref:`cluster.routing.allocation.total_shards_per_node ` limits account only for primary shards and not for possible expanded replicas and thus actual number of all shards can exceed those limits. .. SEEALSO:: :ref:`ddl-replication` .. 
_sql-create-table-number-of-routing-shards: ``number_of_routing_shards`` ---------------------------- This number specifies the hashing space that is used internally to distribute documents across shards. This is an optional setting that enables users to later on increase the number of shards using :ref:`sql-alter-table`. If it's not set explicitly, it's automatically set to a default value based on the number of shards defined in the :ref:`sql-create-table-clustered`, which allows to increase the shards by a factor of `2` each time, up until the maximum of `1024` shards per table. .. NOTE:: It's not possible to update this setting after table creation. .. _sql-create-table-refresh-interval: ``refresh_interval`` -------------------- In CrateDB new written records are not immediately visible. A user has to either invoke the :ref:`REFRESH ` statement or wait for an automatic background refresh. The interval of this background refresh is specified in milliseconds using this ``refresh_interval`` setting. By default it's not specified, which causes tables to be refreshed once every second but only if the table is not idle. A table can become idle if no query accesses it for more than 30 seconds. If a table is idle, the periodic refresh is temporarily disabled. A query hitting an idle table will trigger a refresh and enable the periodic refresh again. When ``refresh_interval`` is set explicitly, table is refreshed regardless of idle state. Use :ref:`ALTER TABLE RESET ` to switch to default 1 second refresh and freeze-on-idle behavior. :value: The refresh interval in milliseconds. A value smaller or equal than 0 turns off the automatic refresh. A value of greater than 0 schedules a periodic refresh of the table. .. NOTE:: A ``refresh_interval`` of 0 does not guarantee that new writes are *NOT* visible to subsequent reads. Only the periodic refresh is disabled. There are other internal factors that might trigger a refresh. .. NOTE:: On partitioned tables, the idle mechanism works per partition. This can be useful for time-based partitions where older partitions are rarely queried. The downside is that if many partitions are idle and a query activates them, there will be a spike in refresh load. If you've such an access pattern, you may want to set an explicit ``refresh_interval`` to have a permanent background refresh. .. SEEALSO:: :ref:`Querying: Refresh ` :ref:`SQL syntax: REFRESH ` .. _sql-create-table-write-wait: .. _sql-create-table-write-wait-for-active-shards: ``write.wait_for_active_shards`` -------------------------------- Specifies the number of shard copies that need to be active for write operations to proceed. If less shard copies are active the operation must wait and retry for up to 30s before timing out. :value: ``all`` or a positive integer up to the total number of configured shard copies (``number_of_replicas + 1``). A value of ``1`` means only the primary has to be active. A value of ``2`` means the primary plus one replica shard has to be active, and so on. The default value is set to ``1``. ``all`` is a special value that means all shards (primary + replicas) must be active for write operations to proceed. Increasing the number of shard copies to wait for improves the resiliency of the system. It reduces the chance of write operations not writing to the desired number of shard copies, but it does not eliminate the possibility completely, because the check occurs before the write operation starts. 
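As a sketch (the table name and columns are illustrative), a table that requires the primary plus one replica copy to be active before a write proceeds could be created like this; note the double quotes required for the nested parameter name::

    CREATE TABLE metrics (
        id INTEGER,
        value DOUBLE PRECISION
    ) WITH (
        number_of_replicas = 1,
        "write.wait_for_active_shards" = 2
    );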
Replica shard copies that missed some writes will be brought up to date by the system eventually, but in case a node holding the primary copy has a system failure, the replica copy couldn't be promoted automatically as it would lead to data loss since the system is aware that the replica shard didn't receive all writes. In such a scenario, :ref:`ALTER TABLE .. REROUTE PROMOTE REPLICA ` can be used to force the :ref:`allocation ` of a stale replica copy to at least recover the data that is available in the stale replica copy. Say you've a 3 node cluster and a table with 1 configured replica. With ``write.wait_for_active_shards=1`` and ``number_of_replicas=1`` a node in the cluster can be restarted without affecting write operations because the primary copies are either active or the replicas can be quickly promoted. If ``write.wait_for_active_shards`` would be set to ``2`` instead and a node is stopped, the write operations would block until the replica is fully replicated again or the write operations would timeout in case the replication is not fast enough. .. _sql-create-table-blocks: .. _sql-create-table-blocks-read-only: ``blocks.read_only`` -------------------- Allows to have a read only table. :value: Table is read only if value set to ``true``. Allows writes and table settings changes if set to ``false``. .. _sql-create-table-blocks-read-only-allow-delete: ``blocks.read_only_allow_delete`` --------------------------------- Allows to have a read only table that additionally can be deleted. :value: Table is read only and can be deleted if value set to ``true``. Allows writes and table settings changes if set to ``false``. This flag should not be set manually as it's used, in an automated way, by the mechanism that protects CrateDB nodes from running out of available disk space. When a disk on a node exceeds the ``cluster.routing.allocation.disk.watermark.flood_stage`` threshold, this block is applied (set to ``true``) to all tables on that affected node. Once you've freed disk space again and the threshold is undershot, the setting is automatically reset to ``false`` for the affected tables. .. SEEALSO:: :ref:`Cluster-wide settings: Disk-based shard allocation ` .. NOTE:: During maintenance operations, you might want to temporarily disable reads, writes or table settings changes. To achieve this, please use the corresponding settings :ref:`sql-create-table-blocks-read`, :ref:`sql-create-table-blocks-write`, :ref:`sql-create-table-blocks-metadata`, or :ref:`sql-create-table-blocks-read-only`, which must be manually reset after the maintenance operation has been completed. .. _sql-create-table-blocks-read: ``blocks.read`` --------------- ``disable``/``enable`` all the read operations :value: Set to ``true`` to disable all read operations for a table, otherwise set ``false``. .. _sql-create-table-blocks-write: ``blocks.write`` ---------------- ``disable``/``enable`` all the write operations :value: Set to ``true`` to disable all write operations and table settings modifications, otherwise set ``false``. .. _sql-create-table-blocks-metadata: ``blocks.metadata`` ------------------- ``disable``/``enable`` the table settings modifications. :values: Disables the table settings modifications if set to ``true``. If set to ``false``, table settings modifications are enabled. .. _sql-create-table-soft-deletes: .. _sql-create-table-soft-deletes-enabled: ``soft_deletes.enabled`` ------------------------ Indicates whether soft deletes are enabled or disabled. 
Soft deletes allow CrateDB to preserve recent deletions within the Lucene index. This information is used for :ref:`shard recovery `. Before the introduction of soft deletes, CrateDB had to retain the information in the :ref:`Translog `. Using soft deletes uses less storage than the Translog equivalent and is faster. Soft deletes are mandatory in CrateDB 5.0, therefore this setting can no longer be modified. It will always be set to ``true``. The setting will be removed in CrateDB 6.0. .. _sql-create-table-soft-deletes-retention-lease-period: ``soft_deletes.retention_lease.period`` --------------------------------------- The maximum period for which a retention lease is retained before it is considered expired. :value: ``12h`` (default). Any positive time value is allowed. CrateDB sometimes needs to replay operations that were executed on one shard on other shards. For example if a shard copy is temporarily unavailable but write operations to the primary copy continues, the missed operations have to be replayed once the shard copy becomes available again. If soft deletes are enabled, CrateDB uses a Lucene feature to preserve recent deletions in the Lucene index so that they can be replayed. Because of that, deleted documents still occupy disk space, which is why CrateDB only preserves certain recently-deleted documents. CrateDB eventually fully discards deleted documents to prevent the index growing larger despite having deleted documents. CrateDB keeps track of operations it expects to need to replay using a mechanism called *shard history retention leases*. Retention leases are a mechanism that allows CrateDB to determine which soft-deleted operations can be safely discarded. If a shard copy fails, it stops updating its shard history retention lease, indicating that the soft-deleted operations should be preserved for later recovery. However, to prevent CrateDB from holding onto shard retention leases forever, they expire after ``soft_deletes.retention_lease.period``, which defaults to ``12h``. Once a retention lease has expired CrateDB can again discard soft-deleted operations. In case a shard copy recovers after a retention lease has expired, CrateDB will fall back to copying the whole index since it can no longer replay the missing history. .. _sql-create-table-codec: ``codec`` --------- By default data is stored using ``LZ4`` compression. This can be changed to ``best_compression`` which uses ``DEFLATE`` for a higher compression ratio, at the expense of slower column value lookups. :values: ``default`` or ``best_compression`` .. _sql-create-table-store: .. _sql-create-table-store-type: ``store.type`` -------------- The store type setting allows you to control how data is stored and accessed on disk. It's not possible to update this setting after table creation. The following storage types are supported: :fs: Default file system implementation. It will pick the best implementation depending on the operating environment, which is currently ``hybridfs`` on all supported systems but is subject to change. :niofs: The ``NIO FS`` type stores the shard index on the file system (Lucene ``NIOFSDirectory``) using NIO. It allows multiple threads to read from the same file concurrently. :mmapfs: The ``MMap FS`` type stores the shard index on the file system (Lucene ``MMapDirectory``) by mapping a file into memory (mmap). Memory mapping uses up a portion of the virtual memory address space in your process equal to the size of the file being mapped. 
Before using this type, be sure you have allowed plenty of virtual address space. :hybridfs: The ``hybridfs`` type is a hybrid of ``niofs`` and ``mmapfs``, which chooses the best file system type for each type of file based on the read access pattern. Similarly to ``mmapfs`` be sure you have allowed plenty of virtual address space. It is possible to restrict the use of the ``mmapfs`` and ``hybridfs`` store type via the :ref:`node.store.allow_mmap ` node setting. .. _sql-create-table-mapping: .. _sql-create-table-mapping-total-fields-limit: ``mapping.total_fields.limit`` ------------------------------ Sets the maximum number of columns that is allowed for a table. Default is ``1000``. :value: Maximum amount of fields in the Lucene index mapping. This includes both the user facing mapping (columns) and internal fields. .. _sql-create-table-translog: .. _sql-create-table-translog-flush-threshold-size: ``translog.flush_threshold_size`` --------------------------------- Sets size of transaction log prior to flushing. :value: Size (bytes) of translog. .. _sql-create-table-translog-sync-interval: ``translog.sync_interval`` -------------------------- How often the translog is fsynced to disk. Defaults to 5s. When setting this interval, please keep in mind that changes logged during this interval and not synced to disk may get lost in case of a failure. This setting only takes effect if :ref:`translog.durability ` is set to ``ASYNC``. :value: Interval in milliseconds. .. _sql-create-table-translog-durability: ``translog.durability`` ----------------------- If set to ``ASYNC`` the translog gets flushed to disk in the background every :ref:`translog.sync_interval `. If set to ``REQUEST`` the flush happens after every operation. :value: ``REQUEST`` (default), ``ASYNC`` .. _sql-create-table-routing: .. _sql-create-table-routing-allocation: .. _sql-create-table-routing-allocation.total-shards-per-node: ``routing.allocation.total_shards_per_node`` -------------------------------------------- Controls the total number of shards (replicas and primaries) allowed to be :ref:`allocated ` on a single node. Defaults to unbounded (-1). :value: Number of shards per node. .. _sql-create-table-routing-allocation-enable: ``routing.allocation.enable`` ----------------------------- Controls shard :ref:`allocation ` for a specific table. Can be set to: :all: Allows shard allocation for all shards. (Default) :primaries: Allows shard allocation only for primary shards. :new_primaries: Allows shard allocation only for primary shards for new tables. :none: No shard allocation allowed. .. _sql-create-table-allocation-max-retries: ``allocation.max_retries`` ---------------------------------- Defines the number of attempts to :ref:`allocate ` a shard before giving up and leaving the shard unallocated. :value: Number of retries to allocate a shard. Defaults to 5. .. _sql-create-table-routing-allocation-include: ``routing.allocation.include.{attribute}`` ------------------------------------------ Assign the table to a node whose ``{attribute}`` has at least one of the comma-separated values. This setting overrides the related :ref:`cluster setting ` for the given table, which will then ignore the cluster setting completely. .. SEEALSO:: :ref:`Data definition: Shard allocation filtering ` .. _sql-create-table-routing-allocation-require: ``routing.allocation.require.{attribute}`` ------------------------------------------ Assign the table to a node whose ``{attribute}`` has all of the comma-separated values. 
This setting overrides the related :ref:`cluster setting ` for the given table which will then ignore the cluster setting completely. .. SEEALSO:: :ref:`Data definition: Shard allocation filtering ` .. _sql-create-table-routing-allocation-exclude: ``routing.allocation.exclude.{attribute}`` ------------------------------------------ Assign the table to a node whose ``{attribute}`` has none of the comma-separated values. This setting overrides the related :ref:`cluster setting ` for the given table which will then ignore the cluster setting completely. .. SEEALSO:: :ref:`Data definition: Shard allocation filtering ` .. _sql-create-table-unassigned: .. _sql-create-table-unassigned.node-left: .. _sql-create-table-unassigned.node-left-delayed-timeout: ``unassigned.node_left.delayed_timeout`` ---------------------------------------- Delay the :ref:`allocation ` of replica shards which have become unassigned because a node has left. It defaults to ``1m`` to give a node time to restart completely (which can take some time when the node has lots of shards). Setting the timeout to ``0`` will start allocation immediately. This setting can be changed on runtime in order to increase/decrease the delayed allocation if needed. .. _sql-create-table-column-policy: ``column_policy`` ----------------- Specifies the column policy of the table. The default column policy is ``strict``. The column policy is defined like this:: WITH ( column_policy = {'dynamic' | 'strict'} ) :strict: Rejecting any column on ``INSERT``, ``UPDATE`` or ``COPY FROM`` which is not defined in the schema :dynamic: New columns can be added using ``INSERT``, ``UPDATE`` or ``COPY FROM``. New columns added to ``dynamic`` tables are, once added, usable as usual columns. One can retrieve them, sort by them and use them in ``WHERE`` clauses. .. SEEALSO:: :ref:`Data definition: Column policy ` .. _sql-create-table-max-ngram-diff: ``max_ngram_diff`` ------------------ Specifies the maximum difference between ``max_ngram`` and ``min_ngram`` when using the ``NGramTokenizer`` or the ``NGramTokenFilter``. The default is 1. .. _sql-create-table-max-shingle-diff: ``max_shingle_diff`` -------------------- Specifies the maximum difference between ``min_shingle_size`` and ``max_shingle_size`` when using the ``ShingleTokenFilter``. The default is 3. .. _sql-create-table-merge: .. _sql-create-table-merge-scheduler: .. _sql-create-table-merge-scheduler-max-thread-count: ``merge.scheduler.max_thread_count`` ------------------------------------ The maximum number of threads on a single shard that may be merging at once. Defaults to ``Math.max(1, Math.min(4, Runtime.getRuntime().availableProcessors() / 2))`` which works well for a good solid-state-disk (SSD). If your index is on spinning platter drives instead, decrease this to 1... highlight:: psql .. _ref-create-table-as: =================== ``CREATE TABLE AS`` =================== Define a new table from existing tables. .. rubric:: Table of contents .. contents:: :local: Synopsis ======== :: CREATE TABLE [ IF NOT EXISTS ] table_ident AS { ( query ) | query } Description =========== ``CREATE TABLE AS`` will create a new table and insert rows based on the specified ``query``. Only the column names, types, and the output rows will be used from the ``query``. Default values will be assigned to the optional parameters used for the table creation. For further details on the default values of the optional parameters, see :ref:`sql-create-table`. 
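For example, a new table could be created and populated from an existing one like this (a sketch; the table and column names are illustrative)::

    CREATE TABLE archived_metrics AS (
        SELECT id, ts, value
        FROM metrics
        WHERE ts < '2023-01-01'
    );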
``IF NOT EXISTS`` ================= If the optional ``IF NOT EXISTS`` clause is used, this statement won't do anything if the table exists already, and ``0`` rows will be returned. Parameters ========== :table_ident: The name (optionally schema-qualified) of the table to be created. :query: A query (``SELECT`` statement) that supplies the rows to be inserted. Refer to the ``SELECT`` statement for a description of the syntax... highlight:: psql .. _ref-create-foreign-table: ======================== ``CREATE FOREIGN TABLE`` ======================== Create a foreign table .. rubric:: Table of contents .. contents:: :local: Synopsis ======== .. code-block:: psql CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_ident ([ { column_name data_type } [, ... ] ]) SERVER server_name [ OPTIONS ( option 'value' [, ... ] ) ] Description =========== ``CREATE FOREIGN TABLE`` is DDL statement that creates a new foreign table. A foreign table is a view onto data in a foreign system. To create a foreign table you must first create a foreign server using :ref:`ref-create-server`. The name of the table must be unique, and distinct from the name of other relations like user tables or views. Foreign tables are listed in the ``information_schema.tables`` view and ``information_schema.foreign_tables``. You can use :ref:`ref-show-create-table` to view the definition of an existing foreign table. Creating a foreign table requires ``AL`` permission on schema or cluster level. A foreign table cannot be used in :ref:`sql-create-publication` for logical replication. Clauses ======= ``IF NOT EXISTS`` ----------------- Do not raise an error if the table already exists. ``OPTIONS`` ----------- :option value: Key value pairs defining foreign data wrapper specific options for the server .See :ref:`administration-fdw` for the foreign data wrapper specific options. .. seealso:: - :ref:`administration-fdw` - :ref:`ref-drop-foreign-table` - :ref:`ref-create-server`.. highlight:: psql .. _sql-alter-table: =============== ``ALTER TABLE`` =============== Alter an existing table. .. rubric:: Table of contents .. contents:: :local: .. _sql-alter-table-synopsis: Synopsis ======== :: ALTER [ BLOB ] TABLE { ONLY table_ident | table_ident [ PARTITION (partition_column = value [ , ... ]) ] } { SET ( parameter = value [ , ... ] ) | RESET ( parameter [ , ... ] ) | { ADD [ COLUMN ] column_name data_type [ column_constraint [ ... ] ] } [, ... ] | { DROP [ COLUMN ] [ IF EXISTS ] column_name } [, ... ] | { RENAME [ COLUMN ] column_name TO new_name } [, ... ] | OPEN | CLOSE | RENAME TO table_ident | REROUTE reroute_option | DROP CONSTRAINT constraint_name } where ``column_constraint`` is:: { PRIMARY KEY | NULL | NOT NULL | INDEX { OFF | USING { PLAIN | FULLTEXT [ WITH ( analyzer = analyzer_name ) ] } | [ CONSTRAINT constraint_name ] CHECK (boolean_expression) } .. _sql-alter-table-description: Description =========== ``ALTER TABLE`` can be used to modify an existing table definition. It provides options to add columns, modify constraints, enabling or disabling table parameters and allows to execute a shard :ref:`reroute allocation `. Use the ``BLOB`` keyword in order to alter a blob table (see :ref:`blob_support`). Blob tables cannot have custom columns which means that the ``ADD COLUMN`` keyword won't work. While altering a partitioned table, using ``ONLY`` will apply changes for the table *only* and not for any possible existing partitions. So these changes will only be applied to new partitions. 
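For example (the table name is illustrative), the following statement changes a setting on the partitioned table itself, leaving existing partitions untouched::

    ALTER TABLE ONLY parted_table SET (number_of_replicas = 2);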
The ``ONLY`` keyword cannot be used together with a `PARTITION`_ clause. See ``CREATE TABLE`` :ref:`sql-create-table-with` for a list of available parameters. :table_ident: The name (optionally schema-qualified) of the table to alter. .. _sql-alter-table-clauses: Clauses ======= .. _sql-alter-table-partition: ``PARTITION`` ------------- .. EDITORIAL NOTE ############## Multiple files (in this directory) use the same standard text for documenting the ``PARTITION`` clause. (Minor verb changes are made to accomodate the specifics of the parent statement.) For consistency, if you make changes here, please be sure to make a corresponding change to the other files. If the table is :ref:`partitioned `, the optional ``PARTITION`` clause can be used to alter one partition exclusively. :: [ PARTITION ( partition_column = value [ , ... ] ) ] :partition_column: One of the column names used for table partitioning. :value: The respective column value. All :ref:`partition columns ` (specified by the :ref:`sql-create-table-partitioned-by` clause) must be listed inside the parentheses along with their respective values using the ``partition_column = value`` syntax (separated by commas). Because each partition corresponds to a unique set of :ref:`partition column ` row values, this clause uniquely identifies a single partition to alter. .. TIP:: The :ref:`ref-show-create-table` statement will show you the complete list of partition columns specified by the :ref:`sql-create-table-partitioned-by` clause. .. NOTE:: BLOB tables cannot be partitioned and hence this clause cannot be used. .. SEEALSO:: :ref:`Partitioned tables: Alter ` .. _sql-alter-table-arguments: Arguments ========= .. _sql-alter-table-set-reset: ``SET/RESET`` ------------- Can be used to change a table parameter to a different value. Using ``RESET`` will reset the parameter to its default value. :parameter: The name of the parameter that is set to a new value or its default. The supported parameters are listed in the :ref:`CREATE TABLE WITH CLAUSE ` documentation. In addition to those, for dynamically changing the number of :ref:`allocated shards `, the parameter ``number_of_shards`` can be used. For more info on that, see :ref:`alter-shard-number`. .. _sql-alter-table-add-column: ``ADD COLUMN`` -------------- Can be used to add an additional column to a table. While columns can be added at any time, adding a new :ref:`generated column ` is only possible if the table is empty. In addition, adding a base column with :ref:`sql-create-table-default-clause` is not supported. It is possible to define a ``CHECK`` constraint with the restriction that only the column being added may be used in the :ref:`boolean expression `. :data_type: Data type of the column which should be added. :column_name: Name of the column which should be added. This can be a sub-column on an existing `OBJECT`. It's possible to add multiple columns at once. .. _sql-alter-table-drop-column: ``DROP COLUMN`` --------------- Can be used to drop a column from a table. :column_name: Name of the column which should be dropped. This can be a sub-column of an `OBJECT`. It's possible to drop multiple columns at once. .. 
NOTE:: It's not allowed to drop a column: - which is a :ref:`system column ` - which is part of a :ref:`PRIMARY KEY ` - used in :ref:`CLUSTERED BY column ` - used in :ref:`PARTITIONED BY ` - is a :ref:`named index` column - used in an :ref:`named index` - is referenced in a :ref:`generated column ` - is referenced in a :ref:`table level constraint with other columns ` .. NOTE:: It's not allowed to drop all columns of a table. .. NOTE:: Dropping columns of a table created before version 5.5 is not supported. .. _sql-alter-table-rename-column: ``RENAME COLUMN`` ----------------- Renames a column of a table :column_name: Name of the column to rename. Supports subscript expressions to rename sub-columns of ``OBJECT`` columns. :new_name: The new name of the column. .. NOTE:: Renaming columns of a table created before version 5.5 is not supported. .. _sql-alter-table-open-close: ``OPEN/CLOSE`` -------------- Can be used to open or close the table. Closing a table means that all operations, except ``ALTER TABLE ...``, will fail. Operations that fail will not return an error, but they will have no effect. Operations on tables containing closed partitions won't fail, but those operations will exclude all closed partitions. .. _sql-alter-table-rename-to: ``RENAME TO`` ------------- Can be used to rename a table or view, while maintaining its schema and data. If renaming a table, the shards of it become temporarily unavailable. .. _sql-alter-table-reroute: ``REROUTE`` ----------- The ``REROUTE`` command provides various options to manually control the :ref:`allocation of shards `. It allows the enforcement of explicit allocations, cancellations and the moving of shards between nodes in a cluster. See :ref:`ddl_reroute_shards` to get the convenient use-cases. The row count defines if the reroute or allocation process of a shard was acknowledged or rejected. .. NOTE:: Partitioned tables require a :ref:`sql-alter-table-partition` clause in order to specify a unique ``shard_id``. :: [ REROUTE reroute_option] where ``reroute_option`` is:: { MOVE SHARD shard_id FROM node TO node | ALLOCATE REPLICA SHARD shard_id ON node | PROMOTE REPLICA SHARD shard_id ON node [ WITH (accept_data_loss = { TRUE | FALSE }) ] | CANCEL SHARD shard_id ON node [ WITH (allow_primary = {TRUE|FALSE}) ] } :shard_id: The shard ID. Ranges from 0 up to the specified number of :ref:`sys-shards` shards of a table. :node: The ID or name of a node within the cluster. See :ref:`sys-nodes` how to gain the unique ID. ``REROUTE`` supports the following options to start/stop shard allocation: **MOVE** A started shard gets moved from one node to another. It requests a ``table_ident`` and a ``shard_id`` to identify the shard that receives the new allocation. Specify ``FROM node`` for the node to move the shard from and ``TO node`` to move the shard to. **ALLOCATE REPLICA** Allows to force allocation of an unassigned replica shard on a specific node. .. _alter-table-reroute-promote-replica: **PROMOTE REPLICA** Force promote a stale replica shard to a primary. In case a node holding a primary copy of a shard had a failure and the replica shards are out of sync, the system won't promote the replica to primary automatically, as it would result in a silent data loss. Ideally the node holding the primary copy of the shard would be brought back into the cluster, but if that is not possible due to a permanent system failure, it is possible to accept the potential data loss and force promote a stale replica using this command. 
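A sketch of such a statement (the table name, shard ID, and node name are illustrative)::

    ALTER TABLE my_table REROUTE PROMOTE REPLICA SHARD 0 ON 'node2' WITH (accept_data_loss = TRUE);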
The parameter ``accept_data_loss`` needs to be set to ``true`` in order for this command to work. If it is not provided or set to false, the command will error out. **CANCEL** This cancels the allocation or :ref:`recovery ` of a ``shard_id`` of a ``table_ident`` on a given ``node``. The ``allow_primary`` flag indicates if it is allowed to cancel the allocation of a primary shard. .. _sql-alter-drop-constraint: ``DROP CONSTRAINT`` ------------------- Removes a :ref:`check_constraint` constraint from a table. .. code-block:: sql ALTER TABLE table_ident DROP CONSTRAINT check_name :table_ident: The name (optionally schema-qualified) of the table. :check_name: The name of the check constraint to be removed. .. WARNING:: A removed CHECK constraints cannot be re-added to a table once dropped... highlight:: psql .. _sql-copy-from: ============= ``COPY FROM`` ============= You can use the ``COPY FROM`` :ref:`statement ` to copy data from a file into a table. .. SEEALSO:: :ref:`Data manipulation: Import and export ` :ref:`SQL syntax: COPY TO ` .. rubric:: Table of contents .. contents:: :local: :depth: 2 .. _sql-copy-from-synopsis: Synopsis ======== :: COPY table_identifier [ ( column_ident [, ...] ) ] [ PARTITION (partition_column = value [ , ... ]) ] FROM uri [ WITH ( option = value [, ...] ) ] [ RETURN SUMMARY ] .. _sql-copy-from-desc: Description =========== A ``COPY FROM`` copies data from a URI to the specified table. The nodes in the cluster will attempt to read the files available at the URI and import the data. Here's an example: :: cr> COPY quotes FROM 'file:///tmp/import_data/quotes.json'; COPY OK, 3 rows affected (... sec) .. NOTE:: The ``COPY`` statements use :ref:`Overload Protection ` to ensure other queries can still perform. Please change these settings during large inserts if needed. .. _sql-copy-from-formats: File formats ------------ CrateDB accepts both JSON and CSV inputs. The format is inferred from the file extension (``.json`` or ``.csv`` respectively) if possible. The :ref:`format ` can also be set as an option. If a format is not specified and the format cannot be inferred, the file will be processed as JSON. JSON files must contain a single JSON object per line and all files must be UTF-8 encoded. Also, any empty lines are skipped. Example JSON data:: {"id": 1, "quote": "Don't panic"} {"id": 2, "quote": "Ford, you're turning into a penguin. Stop it."} A CSV file may or may not contain a header. See :ref:`CSV header option ` for further details. Example CSV data:: id,quote 1,"Don't panic" 2,"Ford, you're turning into a penguin. Stop it." Example CSV data with no header:: 1,"Don't panic" 2,"Ford, you're turning into a penguin. Stop it." See also: :ref:`dml-importing-data`. .. _sql-copy-from-type-checks: Data type checks ---------------- CrateDB checks if the columns' data types match the types from the import file. It casts the types and will always import the data as in the source file. Furthermore CrateDB will check for all :ref:`column_constraints`. For example a `WKT`_ string cannot be imported into a column of ``geo_shape`` or ``geo_point`` type, since there is no implicit cast to the `GeoJSON`_ format. .. NOTE:: In case the ``COPY FROM`` statement fails, the log output on the node will provide an error message. Any data that has been imported until then has been written to the table and should be deleted before restarting the import. .. _sql-copy-from-params: Parameters ========== .. 
_sql-copy-from-table_ident: ``table_ident`` The name (optionally schema-qualified) of an existing table where the data should be put. .. _sql-copy-from-column_ident: ``column_ident`` Used in an optional columns declaration, each ``column_ident`` is the name of a column in the ``table_ident`` table. This currently only has an effect if using the CSV file format. See the ``header`` section for how it behaves. .. _sql-copy-from-uri: ``uri`` An expression or array of expressions. Each :ref:`expression ` must :ref:`evaluate ` to a string literal that is a `well-formed URI`_. URIs must use one of the supported :ref:`URI schemes `. CrateDB supports :ref:`globbing ` for the :ref:`file ` and :ref:`s3 ` URI schemes. .. NOTE:: If the URI scheme is missing, CrateDB assumes the value is a pathname and will prepend the :ref:`file ` URI scheme (i.e., ``file://``). So, for example, CrateDB will convert ``/tmp/file.json`` to ``file:///tmp/file.json``. .. _sql-copy-from-globbing: URI globbing ------------ With :ref:`file ` and :ref:`s3 ` URI schemes, you can use pathname `globbing`_ (i.e., ``*`` wildcards) with the ``COPY FROM`` statement to construct URIs that can match multiple directories and files. Suppose you used ``file:///tmp/import_data/*/*.json`` as the URI. This URI would match all JSON files located in subdirectories of the ``/tmp/import_data`` directory. So, for example, these files would match: - ``/tmp/import_data/foo/1.json`` - ``/tmp/import_data/bar/2.json`` - ``/tmp/import_data/1/boz.json`` .. CAUTION:: A file named ``/tmp/import_data/foo/.json`` would also match the ``file:///tmp/import_data/*/*.json`` URI. The ``*`` wildcard matches any number of characters, including none. However, these files would not match: - ``/tmp/import_data/1.json`` (two few subdirectories) - ``/tmp/import_data/foo/bar/2.json`` (too many subdirectories) - ``/tmp/import_data/1/boz.js`` (file extension mismatch) .. _sql-copy-from-schemes: URI schemes ----------- CrateDB supports the following URI schemes: .. contents:: :local: :depth: 1 .. _sql-copy-from-file: ``file`` '''''''' You can use the ``file://`` scheme to specify an absolute path to one or more files accessible via the local filesystem of one or more CrateDB nodes. For example: .. code-block:: text file:///path/to/dir The files must be accessible on at least one node and the system user running the ``crate`` process must have read access to every file specified. Additionally, only the ``crate`` superuser is allowed to use the ``file://`` scheme. By default, every node will attempt to import every file. If the file is accessible on multiple nodes, you can set the :ref:`shared ` option to true in order to avoid importing duplicates. Use :ref:`sql-copy-from-return-summary` to get information about what actions were performed on each node. .. TIP:: If you are running CrateDB inside a container, the file must be inside the container. If you are using *Docker*, you may have to configure a `Docker volume`_ to accomplish this. .. TIP:: If you are using *Microsoft Windows*, you must include the drive letter in the file URI. For example: .. code-block:: text file://C:\/tmp/import_data/quotes.json Consult the `Windows documentation`_ for more information. .. _sql-copy-from-s3: ``s3`` '''''' You can use the ``s3://`` scheme to access buckets on the `Amazon Simple Storage Service`_ (Amazon S3). For example: .. 
code-block:: text s3://[:@][:/]/ S3 compatible storage providers can be specified by the optional pair of host and port, which defaults to Amazon S3 if not provided. Here is a more concrete example: .. code-block:: text COPY t FROM 's3://accessKey:secretKey@s3.amazonaws.com:443/myBucket/key/a.json' with (protocol = 'https') If no credentials are set the s3 client will operate in anonymous mode. See `AWS Java Documentation`_. Using the ``s3://`` scheme automatically sets the :ref:`shared ` to true. .. TIP:: A ``secretkey`` provided by Amazon Web Services can contain characters such as '/', '+' or '='. These characters must be `URL encoded`_. For a detailed explanation read the official `AWS documentation`_. To escape a secret key, you can use a snippet like this: .. code-block:: console sh$ python -c "from getpass import getpass; from urllib.parse import quote_plus; print(quote_plus(getpass('secret_key: ')))" This will prompt for the secret key and print the encoded variant. Additionally, versions prior to 0.51.x use HTTP for connections to S3. Since 0.51.x these connections are using the HTTPS protocol. Please make sure you update your firewall rules to allow outgoing connections on port ``443``. .. _sql-copy-from-az: ``az`` '''''' You can use the ``az://`` scheme to access files on the `Azure Blob Storage`_. URI must look like ``az:://.//``. For example: .. code-block:: text az://myaccount.blob.core.windows.net/my-container/dir1/dir2/file1.json One of the authentication parameters (:ref:`sql-copy-from-key` or :ref:`sql-copy-from-sas-token`) must be provided in the ``WITH`` clause. Protocol can be provided in the ``WITH`` clause, otherwise ``https`` is used by default. For example: .. code-block:: text COPY t FROM 'az://myaccount.blob.core.windows.net/my-container/dir1/dir2/file1.json' WITH ( key = 'key' ) Using the ``az://`` scheme automatically sets the :ref:`shared ` to ``true``. .. _sql-copy-from-other-schemes: Other schemes ''''''''''''' In addition to the schemes above, CrateDB supports all protocols supported by the `URL`_ implementation of its JVM (typically ``http``, ``https``, ``ftp``, and ``jar``). Please refer to the documentation of the JVM vendor for an accurate list of supported protocols. .. NOTE:: These schemes *do not* support wildcard expansion. .. _sql-copy-from-clauses: Clauses ======= The ``COPY FROM`` :ref:`statement ` supports the following clauses: .. contents:: :local: :depth: 1 .. _sql-copy-from-partition: ``PARTITION`` ------------- .. EDITORIAL NOTE ############## Multiple files (in this directory) use the same standard text for documenting the ``PARTITION`` clause. (Minor verb changes are made to accomodate the specifics of the parent statement.) For consistency, if you make changes here, please be sure to make a corresponding change to the other files. If the table is :ref:`partitioned `, the optional ``PARTITION`` clause can be used to import data into one partition exclusively. :: [ PARTITION ( partition_column = value [ , ... ] ) ] :partition_column: One of the column names used for table partitioning :value: The respective column value. All :ref:`partition columns ` (specified by the :ref:`sql-create-table-partitioned-by` clause) must be listed inside the parentheses along with their respective values using the ``partition_column = value`` syntax (separated by commas). Because each partition corresponds to a unique set of :ref:`partition column ` row values, this clause uniquely identifies a single partition for import. .. 
TIP:: The :ref:`ref-show-create-table` statement will show you the complete list of partition columns specified by the :ref:`sql-create-table-partitioned-by` clause. .. CAUTION:: Partitioned tables do not store the row values for the partition columns, hence every row will be imported into the specified partition regardless of partition column values. .. _sql-copy-from-with: ``WITH`` -------- You can use the optional ``WITH`` clause to specify option values. :: [ WITH ( option = value [, ...] ) ] The ``WITH`` clause supports the following options: .. contents:: :local: :depth: 1 .. _sql-copy-from-bulk_size: **bulk_size** | *Type:* ``integer`` | *Default:* ``10000`` | *Optional* CrateDB will process the lines it reads from the ``path`` in bulks. This option specifies the size of one batch. The provided value must be greater than 0. .. _sql-copy-from-fail_fast: **fail_fast** | *Type:* ``boolean`` | *Default:* ``false`` | *Optional* A boolean value indicating if the ``COPY FROM`` operation should abort early after an error. This is best effort and due to the distributed execution, it may continue processing some records before it aborts. .. _sql-copy-from-wait_for_completion: **wait_for_completion** | *Type:* ``boolean`` | *Default:* ``true`` | *Optional* A boolean value indicating if the ``COPY FROM`` should wait for the copy operation to complete. If set to ``false`` the request returns at once and the copy operation runs in the background. .. _sql-copy-from-shared: **shared** | *Type:* ``boolean`` | *Default:* Depends on the scheme of each URI. | *Optional* This option should be set to true if the URIs location is accessible by more than one CrateDB node to prevent them from importing the same file. If an array of URIs is passed to ``COPY FROM`` this option will overwrite the default for *all* URIs. .. _sql-copy-from-node_filters: **node_filters** | *Type:* ``text`` | *Optional* A filter :ref:`expression ` to select the nodes to run the *read* operation. It's an object in the form of:: { name = '', id = '' } Only one of the keys is required. The ``name`` :ref:`regular expression ` is applied on the ``name`` of all execution nodes, whereas the ``id`` regex is applied on the ``node id``. If both keys are set, *both* regular expressions have to match for a node to be included. If the :ref:`shared ` option is false, a strict node filter might exclude nodes with access to the data leading to a partial import. To verify which nodes match the filter, run the statement with :ref:`EXPLAIN `. .. _sql-copy-from-num_readers: **num_readers** | *Type:* ``integer`` | *Default:* Number of nodes available in the cluster. | *Optional* The number of nodes that will read the resources specified in the URI. If the option is set to a number greater than the number of available nodes it will still use each node only once to do the import. However, the value must be an integer greater than 0. If :ref:`shared ` is set to false this option has to be used with caution. It might exclude the wrong nodes, causing COPY FROM to read no files or only a subset of the files. .. _sql-copy-from-compression: **compression** | *Type:* ``text`` | *Values:* ``gzip`` | *Default:* By default the output is not compressed. | *Optional* Define if and how the exported data should be compressed. .. _sql-copy-from-protocol: **protocol** | *Type:* ``text`` | *Values:* ``http``, ``https`` | *Default:* ``https`` | *Optional* Protocol to use. Used for :ref:`s3 ` and :ref:`az ` schemes only. .. 
_sql-copy-from-overwrite_duplicates: **overwrite_duplicates** | *Type:* ``boolean`` | *Default:* ``false`` | *Optional* ``COPY FROM`` by default won't overwrite rows if a document with the same primary key already exists. Set to true to overwrite duplicate rows. .. _sql-copy-from-empty_string_as_null: **empty_string_as_null** | *Type:* ``boolean`` | *Default:* ``false`` | *Optional* If set to ``true`` the ``empty_string_as_null`` option enables conversion of empty strings into ``NULL``. The option is only supported when using the ``CSV`` format, otherwise, it will be ignored. .. _sql-copy-from-delimiter: **delimiter** | *Type:* ``text`` | *Default:* ``,`` | *Optional* Specifies a single one-byte character that separates columns within each line of the file. The option is only supported when using the ``CSV`` format, otherwise, it will be ignored. .. _sql-copy-from-format: **format** | *Type:* ``text`` | *Values:* ``csv``, ``json`` | *Default:* ``json`` | *Optional* This option specifies the format of the input file. Available formats are ``csv`` or ``json``. If a format is not specified and the format cannot be guessed from the file extension, the file will be processed as JSON. .. _sql-copy-from-header: **header** | *Type:* ``boolean`` | *Default:* ``true`` | *Optional* Used to indicate if the first line of a CSV file contains a header with the column names. If set to ``false``, the CSV must not contain column names in the first line and instead the columns declared in the statement are used. If no columns are declared in the statement, it will default to all columns present in the table in their ``CREATE TABLE`` declaration order. If set to ``true`` the first line in the CSV file must contain the column names. You can use the optional column declaration in addition to import only a subset of the data. If the statement contains no column declarations, all fields in the CSV are read and if it contains fields where there is no matching column in the table, the behavior depends on the ``column_policy`` table setting. If ``dynamic`` it implicitly adds new columns, if ``strict`` the operation will fail. An example of using input file with no header :: cr> COPY quotes FROM 'file:///tmp/import_data/quotes.csv' with (format='csv', header=false); COPY OK, 3 rows affected (... sec) .. _sql-copy-from-skip: **skip** | *Type:* ``integer`` | *Default:* ``0`` | *Optional* Setting this option to ``n`` skips the first ``n`` rows while copying. .. NOTE:: CrateDB by default expects a header in CSV files. If you're using the SKIP option to skip the header, you have to set ``header = false`` as well. See :ref:`header `. .. _sql-copy-from-key: **key** | *Type:* ``text`` | *Optional* Used for :ref:`az ` scheme only. The Azure Storage `Account Key`_. .. NOTE:: It must be provided if :ref:`sql-copy-from-sas-token` is not provided. .. _sql-copy-from-sas-token: **sas_token** | *Type:* ``text`` | *Optional* Used for :ref:`az ` scheme only. The Shared Access Signatures (`SAS`_) token used for authentication for the Azure Storage account. This can be used as an alternative to the The Azure Storage `Account Key`_. The SAS token must have read, write, and list permissions for the container base path and all its contents. These permissions need to be granted for the blob service and apply to resource types service, container, and object. .. NOTE:: It must be provided if :ref:`sql-copy-from-key` is not provided. .. 
_sql-copy-from-return-summary: ``RETURN SUMMARY`` ------------------ By using the optional ``RETURN SUMMARY`` clause, a per-node result set will be returned containing information about possible failures and successfully inserted records. :: [ RETURN SUMMARY ] +---------------------------------------+------------------------------------------------+---------------+ | Column Name | Description | Return Type | +=======================================+================================================+===============+ | ``node`` | Information about the node that has processed | ``OBJECT`` | | | the URI resource. | | +---------------------------------------+------------------------------------------------+---------------+ | ``node['id']`` | The id of the node. | ``TEXT`` | +---------------------------------------+------------------------------------------------+---------------+ | ``node['name']`` | The name of the node. | ``TEXT`` | +---------------------------------------+------------------------------------------------+---------------+ | ``uri`` | The URI the node has processed. | ``TEXT`` | +---------------------------------------+------------------------------------------------+---------------+ | ``error_count`` | The total number of records which failed. | ``BIGINT`` | | | A NULL value indicates a general URI reading | | | | error, the error will be listed inside the | | | | ``errors`` column. | | +---------------------------------------+------------------------------------------------+---------------+ | ``success_count`` | The total number of records which were | ``BIGINT`` | | | inserted. | | | | A NULL value indicates a general URI reading | | | | error, the error will be listed inside the | | | | ``errors`` column. | | +---------------------------------------+------------------------------------------------+---------------+ | ``errors`` | Contains detailed information about all | ``OBJECT`` | | | errors. Limited to at most 25 error messages. | | +---------------------------------------+------------------------------------------------+---------------+ | ``errors[ERROR_MSG]`` | Contains information about a type of an error. | ``OBJECT`` | +---------------------------------------+------------------------------------------------+---------------+ | ``errors[ERROR_MSG]['count']`` | The number records failed with this error. | ``BIGINT`` | +---------------------------------------+------------------------------------------------+---------------+ | ``errors[ERROR_MSG]['line_numbers']`` | The line numbers of the source URI where the | ``ARRAY`` | | | error occurred, limited to the first 50 | | | | errors, to avoid buffer pressure on clients. | | +---------------------------------------+------------------------------------------------+---------------+ .. _Amazon Simple Storage Service: https://aws.amazon.com/s3/ .. _AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html .. _AWS Java Documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingAcctOrUserCredJava.html .. _Azure Blob Storage: https://learn.microsoft.com/en-us/azure/storage/blobs/ .. _Account Key: https://learn.microsoft.com/en-us/purview/sit-defn-azure-storage-account-key-generic#format .. _SAS: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview .. _Docker volume: https://docs.docker.com/storage/volumes/ .. _GeoJSON: https://geojson.org/ .. _globbing: https://en.wikipedia.org/wiki/Glob_(programming) .. 
_percent-encoding: https://en.wikipedia.org/wiki/Percent-encoding .. _URI Scheme: https://en.wikipedia.org/wiki/URI_scheme .. _URL encoded: https://en.wikipedia.org/wiki/Percent-encoding .. _URL: https://docs.oracle.com/javase/8/docs/api/java/net/URL.html .. _well-formed URI: https://www.rfc-editor.org/rfc/rfc2396 .. _Windows documentation: https://docs.microsoft.com/en-us/dotnet/standard/io/file-path-formats .. _WKT: https://en.wikipedia.org/wiki/Well-known_text.. highlight:: psql .. _sql-copy-to: =========== ``COPY TO`` =========== You can use the ``COPY TO`` :ref:`statement ` to export table data to a file. .. SEEALSO:: :ref:`Data manipulation: Import and export ` :ref:`SQL syntax: COPY FROM ` .. rubric:: Table of contents .. contents:: :local: :depth: 2 .. _sql-copy-to-synopsis: Synopsis ======== :: COPY table_ident [ PARTITION ( partition_column = value [ , ... ] ) ] [ ( column [ , ...] ) ] [ WHERE condition ] TO DIRECTORY output_uri [ WITH ( copy_parameter [= value] [, ... ] ) ] .. _sql-copy-to-desc: Description =========== The ``COPY TO`` command exports the contents of a table to one or more files into a given directory with unique filenames. Each node with at least one shard of the table will export its contents onto their local disk. The created files are JSON formatted and contain one table row per line and, due to the distributed nature of CrateDB, *will remain on the same nodes* *where the shards are*. Here's an example: :: cr> COPY quotes TO DIRECTORY '/tmp/' with (compression='gzip'); COPY OK, 3 rows affected ... .. NOTE:: Currently only user tables can be exported. System tables like ``sys.nodes`` and blob tables don't work with the ``COPY TO`` statement. The ``COPY`` statements use :ref:`Overload Protection ` to ensure other queries can still perform. Please change these settings during large inserts if needed. .. _sql-copy-to-params: Parameters ========== .. _sql-copy-to-table_ident: ``table_ident`` The name (optionally schema-qualified) of the table to be exported. .. _sql-copy-to-column: ``column`` (optional) A list of column :ref:`expressions ` that should be exported. E.g. :: cr> COPY quotes (quote, author) TO DIRECTORY '/tmp/'; COPY OK, 3 rows affected ... .. NOTE:: When declaring columns, this changes the output to JSON list format, which is currently not supported by the ``COPY FROM`` statement. .. _sql-copy-to-clauses: Clauses ======= .. _sql-copy-to-partition: ``PARTITION`` ------------- .. EDITORIAL NOTE ############## Multiple files (in this directory) use the same standard text for documenting the ``PARTITION`` clause. (Minor verb changes are made to accomodate the specifics of the parent statement.) For consistency, if you make changes here, please be sure to make a corresponding change to the other files. If the table is :ref:`partitioned `, the optional ``PARTITION`` clause can be used to export data from a one partition exclusively. :: [ PARTITION ( partition_column = value [ , ... ] ) ] :partition_column: One of the column names used for table partitioning. :value: The respective column value. All :ref:`partition columns ` (specified by the :ref:`sql-create-table-partitioned-by` clause) must be listed inside the parentheses along with their respective values using the ``partition_column = value`` syntax (separated by commas). Because each partition corresponds to a unique set of :ref:`partition column ` row values, this clause uniquely identifies a single partition to export. .. 
TIP:: The :ref:`ref-show-create-table` statement will show you the complete list of partition columns specified by the :ref:`sql-create-table-partitioned-by` clause. .. _sql-copy-to-where: ``WHERE`` --------- The ``WHERE`` clauses use the same syntax as ``SELECT`` statements, allowing partial exports. (see :ref:`sql_dql_where_clause` for more information). Example of using ``WHERE`` clause with :ref:`comparison operators ` for partial export: :: cr> COPY quotes WHERE category = 'philosophy' TO DIRECTORY '/tmp/'; COPY OK, 3 rows affected ... .. _sql-copy-to-to: ``TO`` ------ The ``TO`` clause allows you to specify an output location. :: TO DIRECTORY output_uri .. _sql-copy-to-to-params: Parameters '''''''''' ``output_uri`` An :ref:`expression ` must :ref:`evaluate ` to a string literal that is a `well-formed URI`_. URIs must use one of the supported :ref:`URI schemes `. .. NOTE:: If the URI scheme is missing, CrateDB assumes the value is a pathname and will prepend the :ref:`file ` URI scheme (i.e., ``file://``). So, for example, CrateDB will convert ``/tmp/file.json`` to ``file:///tmp/file.json``. .. _sql-copy-to-schemes: URI schemes ----------- CrateDB supports the following URI schemes: .. contents:: :local: :depth: 1 .. _sql-copy-to-file: ``file`` '''''''' You can use the ``file://`` scheme to specify an absolute path to an output location on the local file system. For example: .. code-block:: text file:///path/to/dir .. TIP:: If you are running CrateDB inside a container, the location must be inside the container. If you are using *Docker*, you may have to configure a `Docker volume`_ to accomplish this. .. TIP:: If you are using *Microsoft Windows*, you must include the drive letter in the file URI. For example: .. code-block:: text file://C:\/tmp/import_data/quotes.json Consult the `Windows documentation`_ for more information. .. _sql-copy-to-s3: ``s3`` '''''' You can use the ``s3://`` scheme to access buckets on the `Amazon Simple Storage Service`_ (Amazon S3). For example: .. code-block:: text s3://[:@][:/]/ S3 compatible storage providers can be specified by the optional pair of host and port, which defaults to Amazon S3 if not provided. Here is a more concrete example: .. code-block:: text COPY t TO DIRECTORY 's3://myAccessKey:mySecretKey@s3.amazonaws.com:80/myBucket/key1' with (protocol = 'http') If no credentials are set the s3 client will operate in anonymous mode. See `AWS Java Documentation`_. .. TIP:: A ``secretkey`` provided by Amazon Web Services can contain characters such as '/', '+' or '='. These characters must be `URL encoded`_. For a detailed explanation read the official `AWS documentation`_. To escape a secret key, you can use a snippet like this: .. code-block:: console sh$ python -c "from getpass import getpass; from urllib.parse import quote_plus; print(quote_plus(getpass('secret_key: ')))" This will prompt for the secret key and print the encoded variant. Additionally, versions prior to 0.51.x use HTTP for connections to S3. Since 0.51.x these connections are using the HTTPS protocol. Please make sure you update your firewall rules to allow outgoing connections on port ``443``. .. _sql-copy-to-az: ``az`` '''''' You can use the ``az://`` scheme to access files on the `Azure Blob Storage`_. URI must look like ``az:://.//``. For example: .. code-block:: text az://myaccount.blob.core.windows.net/my-container/dir1/dir2/file1.json One of the authentication parameters (:ref:`sql-copy-to-key` or :ref:`sql-copy-to-sas-token`) must be provided in the ``WITH`` clause. 
Protocol can be provided in the ``WITH`` clause, otherwise ``https`` is used by default. For example: .. code-block:: text COPY source TO DIRECTORY 'az://myaccount.blob.core.windows.net/my-container/dir1/dir2/file1.json' WITH ( key = 'key' ) .. _sql-copy-to-with: ``WITH`` -------- You can use the optional ``WITH`` clause to specify copy parameter values. :: [ WITH ( copy_parameter [= value] [, ... ] ) ] The ``WITH`` clause supports the following copy parameters: .. contents:: :local: :depth: 1 .. _sql-copy-to-compression: **compression** | *Type:* ``text`` | *Values:* ``gzip`` | *Default:* By default the output is not compressed. | *Optional* Define if and how the exported data should be compressed. .. _sql-copy-to-protocol: **protocol** | *Type:* ``text`` | *Values:* ``http``, ``https`` | *Default:* ``https`` | *Optional* Protocol to use. Used only by the :ref:`s3 ` and :ref:`az ` schemes. .. _sql-copy-to-format: **format** | *Type:* ``text`` | *Values:* ``json_object``, ``json_array`` | *Default:* Depends on defined columns. See description below. | *Optional* Possible values for the ``format`` settings are: ``json_object`` Each row in the result set is serialized as JSON object and written to an output file where one line contains one object. This is the default behavior if no columns are defined. Use this format to import with :ref:`COPY FROM `. ``json_array`` Each row in the result set is serialized as JSON array, storing one array per line in an output file. This is the default behavior if columns are defined. .. _sql-copy-to-wait_for_completion: **wait_for_completion** | *Type:* ``boolean`` | *Default:* ``true`` | *Optional* A boolean value indicating if the ``COPY TO`` should wait for the copy operation to complete. If set to ``false`` the request returns at once and the copy operation runs in the background. .. _sql-copy-to-key: **key** | *Type:* ``text`` | *Optional* Used for :ref:`azblob ` scheme only. The Azure Storage `Account Key`_. .. NOTE:: It must be provided if :ref:`sql-copy-to-sas-token` is not provided. .. _sql-copy-to-sas-token: **sas_token** | *Type:* ``text`` | *Optional* Used for :ref:`azblob ` scheme only. The Shared Access Signatures (`SAS`_) token used for authentication for the Azure Storage account. This can be used as an alternative to the The Azure Storage `Account Key`_. The SAS token must have read, write, and list permissions for the container base path and all its contents. These permissions need to be granted for the blob service and apply to resource types service, container, and object. .. NOTE:: It must be provided if :ref:`sql-copy-to-key` is not provided. .. _Amazon S3: https://aws.amazon.com/s3/ .. _Amazon Simple Storage Service: https://aws.amazon.com/s3/ .. _AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html .. _AWS Java Documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingAcctOrUserCredJava.html .. _Azure Blob Storage: https://learn.microsoft.com/en-us/azure/storage/blobs/ .. _SAS: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview .. _Account Key: https://learn.microsoft.com/en-us/purview/sit-defn-azure-storage-account-key-generic#format .. _Docker volume: https://docs.docker.com/storage/volumes/ .. _gzip: https://www.gzip.org/ .. _NFS: https://en.wikipedia.org/wiki/Network_File_System .. _URL encoded: https://en.wikipedia.org/wiki/Percent-encoding .. _well-formed URI: https://www.rfc-editor.org/rfc/rfc2396 .. 
_Windows documentation: https://docs.microsoft.com/en-us/dotnet/standard/io/file-path-formats.. highlight:: psql .. _data-types: ========== Data types ========== Data can be stored in different formats. CrateDB has different types that can be specified if a table is created using the :ref:`sql-create-table` statement. Data types play a central role as they limit what kind of data can be inserted and how it is stored. They also influence the behaviour when the records are queried. Data type names are reserved words and need to be escaped when used as column names. .. rubric:: Table of contents .. contents:: :local: :depth: 2 .. _data-types-overview: Overview ======== .. _data-types-examples: Supported types --------------- CrateDB supports the following data types. Scroll down for more details. .. list-table:: :header-rows: 1 :widths: 10 20 20 :align: left * - Type - Description - Example * - ``BOOLEAN`` - A boolean value - ``true`` or ``false`` * - ``VARCHAR(n)`` and ``TEXT`` - A string of Unicode characters - ``'foobar'`` * - ``CHARACTER(n)`` and ``CHAR(n)`` - A fixed-length, blank padded string of Unicode characters - ``'foobar'`` * - ``SMALLINT``, ``INTEGER`` and ``BIGINT`` - A signed integer value - ``12345`` or ``-12345`` * - ``REAL`` - An inexact `single-precision floating-point`_ value. - ``3.4028235e+38`` * - ``DOUBLE PRECISION`` - An inexact `double-precision floating-point`_ value. - ``1.7976931348623157e+308`` * - ``NUMERIC(precision, scale)`` - An exact `fixed-point fractional number`_ with an arbitrary, user-specified precision. - ``123.45`` * - ``TIMESTAMP WITH TIME ZONE`` - Time and date with time zone - ``'1970-01-02T00:00:00+01:00'`` * - ``TIMESTAMP WITHOUT TIME ZONE`` - Time and date without time zone - ``'1970-01-02T00:00:00'`` * - ``DATE`` - A specific year, month and a day in UTC. - ``'2021-03-09'`` * - ``TIME`` - A specific time as the number of milliseconds since midnight along with an optional time zone offset - ``'13:00:00'`` or ``'13:00:00+01:00'`` * - ``BIT(n)`` - A bit sequence - ``B'00010010'`` * - ``IP`` - An IP address (IPv4 or IPv6) - ``'127.0.0.1'`` or ``'0:0:0:0:0:ffff:c0a8:64'`` * - ``OBJECT`` - Express an object - :: { "foo" = 'bar', "baz" = 'qux' } * - ``ARRAY`` - Express an array - :: [ {"name" = 'Alice', "age" = 33}, {"name" = 'Bob', "age" = 45} ] * - ``GEO_POINT`` - A geographic data type comprised of a pair of coordinates (latitude and longitude) - ``[13.46738, 52.50463]`` or ``POINT( 13.46738 52.50463 )`` * - ``GEO_SHAPE`` - Express arbitrary `GeoJSON geometry objects`_ - ``[13.46738, 52.50463]`` or ``POINT( 13.46738 52.50463 )`` :: { type = 'Polygon', coordinates = [ [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ] ] } or:: 'POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))' * - ``float_vector(n)`` - A fixed length vector of floating point numbers - ``[3.14, 42.21]`` * - ``ROW`` - A composite type made up of a number of inner types/fields. Similar to a ``tuple`` in other languages. - No literal support yet. Result format depends on the used protocol. HTTP uses a list. PostgreSQL serializes it via the ``record`` type (``oid`` 2249). .. _data-types-ranges-widths: Ranges and widths ----------------- This section lists all data types supported by CrateDB at a glance in tabular form, including some facts about their byte widths, value ranges and properties. Please note that the byte widths do not equal the total storage sizes, which are likely to be larger due to additional metadata. .. 
list-table:: :header-rows: 1 :widths: 15 10 30 20 :align: left * - Type - Width - Range - Description * - ``BOOLEAN`` - 1 byte - ``true`` or ``false`` - Boolean type * - ``VARCHAR(n)`` - variable - Minimum length: 1. Maximum length: 2^31-1 (upper :ref:`integer ` range). [#f1]_ - Strings of variable length. All Unicode characters are allowed. * - ``TEXT`` - variable - Minimum length: 1. Maximum length: 2^31-1 (upper :ref:`integer ` range). [#f1]_ - Strings of variable length. All Unicode characters are allowed. * - ``CHARACTER(n)``, ``CHAR(n)`` - variable - Minimum length: 1. Maximum length: 2^31-1 (upper :ref:`integer ` range). [#f1]_ - Strings of fixed length, blank padded. All Unicode characters are allowed. * - ``SMALLINT`` - 2 bytes - -32,768 to 32,767 - Small-range integer * - ``INTEGER`` - 4 bytes - -2^31 to 2^31-1 - Typical choice for integer * - ``BIGINT`` - 8 bytes - -2^63 to 2^63-1 - Large-range integer * - ``NUMERIC`` - variable - Up to 131072 digits before, and up to 16383 digits after the decimal point - user-specified precision, exact * - ``REAL`` - 4 bytes - 6 decimal digits precision - Inexact, variable-precision * - ``DOUBLE PRECISION`` - 8 bytes - 15 decimal digits precision - Inexact, variable-precision * - ``TIMESTAMP WITH TIME ZONE`` - 8 bytes - 292275054BC to 292278993AD - Time and date with time zone * - ``TIMESTAMP WITHOUT TIME ZONE`` - 8 bytes - 292275054BC to 292278993AD - Time and date without time zone * - ``DATE`` - 8 bytes - 292275054BC to 292278993AD - Date in UTC. Internally stored as ``BIGINT``. * - ``TIME WITH TIME ZONE`` - 12 bytes - 292275054BC to 292278993AD - 00:00:00.000000 to 23:59:59.999999 zone: -18:00 to 18:00 * - ``BIT(n)`` - variable - A sequence of ``0`` or ``1`` digits. Minimum length: 1. Maximum length: 2^31-1 (upper :ref:`integer ` range). - A string representation of a bit sequence. * - ``IP`` - 8 bytes - IP addresses are stored as ``BIGINT`` values. - A string representation of an IP address (IPv4 or IPv6). * - ``OBJECT`` - variable - The theoretical maximum length (number of key/value pairs) is slightly below Java's ``Integer.MAX_VALUE``. - An object is structured as a collection of key-values, containing any other type, including further child objects. * - ``ARRAY`` - variable - The theoretical maximum length (number of elements) is slightly below Java's ``Integer.MAX_VALUE``. - An array is structured as a sequence of any other type. * - ``GEO_POINT`` - 16 bytes - Each coordinate is stored as a ``DOUBLE PRECISION`` type. - A ``GEO_POINT`` is a geographic data type used to store latitude and longitude coordinates. * - ``GEO_SHAPE`` - variable - Each coordinate is stored as a ``DOUBLE PRECISION`` type. - A ``GEO_SHAPE`` column can store different kinds of `GeoJSON geometry objects`_. * - ``FLOAT_VECTOR(n)`` - ``n`` - Vector Minimum length: 1. Maximum length: 2048. - A vector of floating point numbers. .. rubric:: Footnotes .. [#f1] Using the :ref:`Column Store ` limits the values of text columns to a maximum length of 32766 bytes. You can relax that limitation by either defining a column to not use the column store or by :ref:`turning off indexing `. Precedence and type conversion ------------------------------ When expressions of different data types are combined by operators or scalars, the data type with the lower precedence is converted to the data type with the higher precedence. If an implicit conversion between the types isn't supported, an error is returned. 
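
As a quick illustration of this rule (a minimal, illustrative example;
``pg_typeof()`` is used here only to display the resulting type), combining an
``INTEGER`` with a ``DOUBLE PRECISION`` value implicitly converts the integer
operand, because ``DOUBLE PRECISION`` has the higher precedence:

.. code-block:: psql

    -- The INTEGER operand is implicitly converted to DOUBLE PRECISION,
    -- the type with the higher precedence, before the addition is evaluated.
    SELECT pg_typeof(1::INTEGER + 0.5::DOUBLE PRECISION) AS result_type;
    -- expected result_type: double precision
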
The following precedence order is used for data types (highest to lowest): 1. Custom (complex) types (currently: :ref:`bitstring `, :ref:`float_vector `) (highest) 2. :ref:`GEO_SHAPE ` 3. :ref:`JSON ` 4. :ref:`OBJECT ` 5. :ref:`GEO_POINT ` 6. ``Record`` (internal type, return type of :ref:`table functions `) 7. :ref:`Array ` 8. :ref:`Numeric ` 9. :ref:`Double precision ` 10. :ref:`Real ` 11. :ref:`IP ` 12. :ref:`Bigint ` 13. :ref:`Timestamp without time zone ` 14. :ref:`Timestamp with time zone ` 15. :ref:`Date ` 16. :ref:`Interval ` 17. :ref:`Regclass ` 18. :ref:`Regproc ` 19. :ref:`Integer ` 20. :ref:`Time with time zone ` 21. :ref:`Smallint ` 22. :ref:`Boolean ` 23. :ref:`"Char" ` 24. :ref:`Text ` 25. :ref:`Character ` 26. :ref:`NULL ` (lowest) .. _data-types-primitive: Primitive types =============== Primitive types are types with :ref:`scalar ` values: .. contents:: :local: :depth: 2 .. _data-types-nulls: Null values ----------- .. _type-null: ``NULL`` '''''''' A ``NULL`` represents a missing value. .. NOTE:: ``NULL`` values are not the same as ``0``, an empty string (``''``), an empty object (``{}``), an empty array (``[]``), or any other kind of empty or zeroed data type. You can use ``NULL`` values when inserting records to indicate the absence of a data point when the value for a specific column is not known. Similarly, CrateDB will produce ``NULL`` values when, for example, data is missing from an :ref:`outer left-join ` operation. This happens when a row from one relation has no corresponding row in the joined relation. If you insert a record without specifying the value for a particular column, CrateDB will insert a ``NULL`` value for that column. For example:: cr> CREATE TABLE users ( ... first_name TEXT, ... surname TEXT ... ); CREATE OK, 1 row affected (... sec) Insert a record without specifying ``surname``:: cr> INSERT INTO users ( ... first_name ... ) VALUES ( ... 'Alice' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE users; REFRESH OK, 1 row affected (... sec) The resulting row will have a ``NULL`` value for ``surname``:: cr> SELECT ... first_name, ... surname ... FROM users ... WHERE first_name = 'Alice'; +------------+---------+ | first_name | surname | +------------+---------+ | Alice | NULL | +------------+---------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE users; DROP OK, 1 row affected (... sec) You can prevent ``NULL`` values being inserted altogether with a :ref:`NOT NULL constraint `, like so:: cr> CREATE TABLE users_with_surnames ( ... first_name TEXT, ... surname TEXT NOT NULL ... ); CREATE OK, 1 row affected (... sec) Now, when you try to insert a user without a surname, it will produce an error:: cr> INSERT INTO users_with_surnames ( ... first_name ... ) VALUES ( ... 'Alice' ... ); SQLParseException["surname" must not be null] .. HIDE: cr> DROP TABLE users_with_surnames; DROP OK, 1 row affected (... sec) .. _data-types-boolean-values: Boolean values -------------- .. _type-boolean: ``BOOLEAN`` ''''''''''' A basic boolean type accepting ``true`` and ``false`` as values. Example:: cr> CREATE TABLE my_table ( ... first_column BOOLEAN ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... first_column ... ) VALUES ( ... true ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT * FROM my_table; +--------------+ | first_column | +--------------+ | TRUE | +--------------+ SELECT 1 row in set (... sec) .. 
HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _data-types-character-data: Character data -------------- Character types are general purpose strings of character data. CrateDB supports the following character types: .. contents:: :local: :depth: 1 .. NOTE:: Only character data types without specified length can be :ref:`analyzed for full text search `. By default, the :ref:`plain ` analyzer is used. .. _type-varchar: .. _data-type-varchar: ``VARCHAR(n)`` '''''''''''''' The ``VARCHAR(n)`` (or ``CHARACTER VARYING(n)``) type represents variable length strings. All Unicode characters are allowed. The optional length specification ``n`` is a positive :ref:`integer ` that defines the maximum length, in characters, of the values that have to be stored or cast. The minimum length is ``1``. The maximum length is defined by the upper :ref:`integer ` range. An attempt to store a string literal that exceeds the specified length of the character data type results in an error. :: cr> CREATE TABLE users ( ... id VARCHAR, ... name VARCHAR(3) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO users ( ... id, ... name ... ) VALUES ( ... '1', ... 'Alice Smith' ... ); SQLParseException['Alice Smith' is too long for the text type of length: 3] If the excess characters are all spaces, the string literal will be truncated to the specified length. :: cr> INSERT INTO users ( ... id, ... name ... ) VALUES ( ... '1', ... 'Bob ' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE users; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... id, ... name, ... char_length(name) AS name_length ... FROM users; +----+------+-------------+ | id | name | name_length | +----+------+-------------+ | 1 | Bob | 3 | +----+------+-------------+ SELECT 1 row in set (... sec) If a value is explicitly cast to ``VARCHAR(n)``, then an over-length value will be truncated to ``n`` characters without raising an error. :: cr> SELECT 'Alice Smith'::VARCHAR(5) AS name; +-------+ | name | +-------+ | Alice | +-------+ SELECT 1 row in set (... sec) ``CHARACTER VARYING`` and ``VARCHAR`` without the length specifier are aliases for the :ref:`text ` data type, see also :ref:`type aliases `. .. HIDE: cr> DROP TABLE users; DROP OK, 1 row affected (... sec) .. _data-type-character: ``CHARACTER(n)`` '''''''''''''''' The ``CHARACTER(n)`` (or ``CHAR(n)``) type represents fixed-length, blank padded strings. All Unicode characters are allowed. The optional length specification ``n`` is a positive :ref:`integer ` that defines the maximum length, in characters, of the values that have to be stored or cast. The minimum length is ``1``. The maximum length is defined by the upper :ref:`integer ` range. If the type is used without the length parameter, a length of ``1`` is used. An attempt to store a string literal that exceeds the specified length of the character data type results in an error. :: cr> CREATE TABLE users ( ... id CHARACTER, ... name CHAR(3) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO users ( ... id, ... name ... ) VALUES ( ... '1', ... 'Alice Smith' ... ); SQLParseException['Alice Smith' is too long for the character type of length: 3] If the excess characters are all spaces, the string literal will be truncated to the specified length. :: cr> INSERT INTO users ( ... id, ... name ... ) VALUES ( ... '1', ... 'Bob ' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE users; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... id, ... name, ... 
char_length(name) AS name_length ... FROM users; +----+------+-------------+ | id | name | name_length | +----+------+-------------+ | 1 | Bob | 3 | +----+------+-------------+ SELECT 1 row in set (... sec) :: cr> INSERT INTO users ( ... id, ... name ... ) VALUES ( ... '1', ... 'Bob ' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE users; REFRESH OK, 1 row affected (... sec) cr> DELETE FROM users WHERE id = '1'; DELETE OK, 2 rows affected (... sec) If a value is inserted with a length lower than the defined one, the value will be right padded with whitespaces. :: cr> INSERT INTO users ( ... id, ... name ... ) VALUES ( ... '1', ... 'Bo' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE users; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... id, ... name, ... char_length(name) AS name_length ... FROM users; +----+------+-------------+ | id | name | name_length | +----+------+-------------+ | 1 | Bo | 3 | +----+------+-------------+ SELECT 1 row in set (... sec) If a value is explicitly cast to ``CHARACTER(n)``, then an over-length value will be truncated to ``n`` characters without raising an error. :: cr> SELECT 'Alice Smith'::CHARACTER(5) AS name; +-------+ | name | +-------+ | Alice | +-------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE users; DROP OK, 1 row affected (... sec) .. _type-text: ``TEXT`` '''''''' A text-based basic type containing one or more characters. All Unicode characters are allowed. Create table:: cr> CREATE TABLE users ( ... name TEXT ... ); CREATE OK, 1 row affected (... sec) Insert data:: cr> INSERT INTO users ( ... name ... ) VALUES ( ... '🌻 Alice 🌻' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE users; REFRESH OK, 1 row affected (... sec) Query data:: cr> SELECT * FROM users; +-------------+ | name | +-------------+ | 🌻 Alice 🌻 | +-------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE users; DROP OK, 1 row affected (... sec) .. NOTE:: The maximum indexed string length is restricted to 32766 bytes when encoded with UTF-8 unless the string is analyzed using full text or indexing and the usage of the :ref:`ddl-storage-columnstore` is disabled. There is no difference in storage costs among all character data types. .. _data-type-json: ``json`` '''''''' A type representing a JSON string. This type only exists for compatibility and interoperability with PostgreSQL. It cannot to be used in data definition statements and it is not possible to use it to store data. To store JSON data use the existing :ref:`OBJECT ` type. It is a more powerful alternative that offers more flexibility but delivers the same benefits. The primary use of the JSON type is in :ref:`type casting ` for interoperability with PostgreSQL clients which may use the ``JSON`` type. The following type casts are example of supported usage of the ``JSON`` data type: Casting from ``STRING`` to ``JSON``:: cr> SELECT '{"x": 10}'::json; +-------------+ | '{"x": 10}' | +-------------+ | {"x": 10} | +-------------+ SELECT 1 row in set (... sec) Casting from ``JSON`` to ``OBJECT``:: cr> SELECT ('{"x": 10}'::json)::object; +-----------+ | {"x"=10} | +-----------+ | {"x": 10} | +-----------+ SELECT 1 row in set (... sec) Casting from ``OBJECT`` to ``JSON``:: cr> SELECT {x=10}::json; +------------+ | '{"x":10}' | +------------+ | {"x":10} | +------------+ SELECT 1 row in set (... sec) .. _data-types-numeric: Numeric data ------------ CrateDB supports the following numeric types: .. contents:: :local: :depth: 1 .. 
_data-types-floating-point: .. NOTE:: The :ref:`REAL ` and :ref:`DOUBLE PRECISION ` data types are inexact, variable-precision floating-point types, meaning that these types are stored as an approximation. Accordingly, storage, calculation, and retrieval of the value will not always result in an exact representation of the actual floating-point value. For instance, the result of applying :ref:`SUM ` or :ref:`AVG ` aggregate functions may slightly vary between query executions or comparing floating-point values for equality might not always match. CrateDB conforms to the `IEEE 754`_ standard concerning special values for floating-point data types, meaning that ``NaN``, ``Infinity``, ``-Infinity`` (negative infinity), and ``-0`` (signed zero) are all supported:: cr> SELECT ... 0.0 / 0.0 AS a, ... 1.0 / 0.0 AS B, ... 1.0 / -0.0 AS c; +-----+----------+-----------+ | a | b | c | +-----+----------+-----------+ | NaN | Infinity | -Infinity | +-----+----------+-----------+ SELECT 1 row in set (... sec) These special numeric values can also be inserted into a column of type ``REAL`` or ``DOUBLE PRECISION`` using a :ref:`TEXT ` literal. For instance:: cr> CREATE TABLE my_table ( ... column_1 INTEGER, ... column_2 BIGINT, ... column_3 SMALLINT, ... column_4 DOUBLE PRECISION, ... column_5 REAL, ... column_6 "CHAR" ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... column_4, ... column_5 ... ) VALUES ( ... 'NaN', ... 'Infinity' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... column_4, ... column_5 ... FROM my_table; +----------+----------+ | column_4 | column_5 | +----------+----------+ | NaN | Infinity | +----------+----------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-smallint: ``SMALLINT`` '''''''''''' A small integer. Limited to two bytes, with a range from -32,768 to 32,767. Example:: cr> CREATE TABLE my_table ( ... number SMALLINT ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 32767 ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT number FROM my_table; +--------+ | number | +--------+ | 32767 | +--------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-integer: ``INTEGER`` ''''''''''' An integer. Limited to four bytes, with a range from -2^31 to 2^31-1. Example:: cr> CREATE TABLE my_table ( ... number INTEGER ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 2147483647 ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT number FROM my_table; +------------+ | number | +------------+ | 2147483647 | +------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-bigint: ``BIGINT`` '''''''''' A large integer. Limited to eight bytes, with a range from -2^63 + 1 to 2^63-2. Example: :: cr> CREATE TABLE my_table ( ... number BIGINT ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 9223372036854775806 ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... 
sec) :: cr> SELECT number FROM my_table; +---------------------+ | number | +---------------------+ | 9223372036854775806 | +---------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-numeric: ``NUMERIC(precision, scale)`` ''''''''''''''''''''''''''''' An exact `fixed-point fractional number`_ with an arbitrary, user-specified precision. Variable size, with up to 38 digits for storage. If using ``NUMERIC`` only for type casts up to 131072 digits before the decimal point and up to 16383 digits after the decimal point are supported. For example, using a :ref:`cast from a string literal `:: cr> SELECT NUMERIC(5, 2) '123.45' AS number; +--------+ | number | +--------+ | 123.45 | +--------+ SELECT 1 row in set (... sec) This type is usually used when it is important to preserve exact precision or handle values that exceed the range of the numeric types of the fixed length. The aggregations and arithmetic operations on numeric values are much slower compared to operations on the integer or floating-point types. The ``NUMERIC`` type can be configured with the ``precision`` and ``scale``. The ``precision`` value of a numeric is the total count of significant digits in the unscaled numeric value. The ``scale`` value of a numeric is the count of decimal digits in the fractional part, to the right of the decimal point. For example, the number 123.45 has a precision of ``5`` and a scale of ``2``. Integers have a scale of zero. The scale must be less than or equal to the precision and greater or equal to zero. To declare the ``NUMERIC`` type with the precision and scale, use the syntax:: NUMERIC(precision, scale) Alternatively, only the precision can be specified, the scale will be zero or positive integer in this case:: NUMERIC(precision) Without configuring the precision and scale the ``NUMERIC`` type value will be represented by an unscaled value of the unlimited precision:: NUMERIC .. NOTE:: ``NUMERIC`` without precision and scale cannot be used in CREATE TABLE statements. To store values of type NUMERIC it is required to define the precision and scale. .. NOTE:: ``NUMERIC`` values returned as results of an SQL query might loose precision when using the :ref:`HTTP interface`, because of limitation of `JSON Data Types`_ for numbers with higher than 53-bits precision. The ``NUMERIC`` type is internally backed by the Java ``BigDecimal`` class. For more detailed information about its behaviour, see `BigDecimal documentation`_. .. _type-real: ``REAL`` '''''''' An inexact `single-precision floating-point`_ value. Limited to four bytes, six decimal digits precision. Example: :: cr> CREATE TABLE my_table ( ... number REAL ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 3.4028235e+38 ... ); INSERT OK, 1 row affected (... sec) .. TIP:: ``3.4028235+38`` represents the value 3.4028235 × 10\ :sup:`38` .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT number FROM my_table; +---------------+ | number | +---------------+ | 3.4028235e+38 | +---------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DELETE FROM my_table; DELETE OK, 1 row affected (... sec) cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) You can insert values which exceed the maximum precision, like so:: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 3.4028234664e+38 ... ); INSERT OK, 1 row affected (... sec) .. 
HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) However, the recorded value will be an approximation of the original (i.e., the additional precision is lost):: cr> SELECT number FROM my_table; +---------------+ | number | +---------------+ | 3.4028235e+38 | +---------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. SEEALSO:: :ref:`CrateDB floating-point values ` .. _type-double-precision: ``DOUBLE PRECISION`` '''''''''''''''''''' An inexact number with variable precision supporting `double-precision floating-point`_ values. Limited to eight bytes, with 15 decimal digits precision. Example: :: cr> CREATE TABLE my_table ( ... number DOUBLE PRECISION ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 1.7976931348623157e+308 ... ); INSERT OK, 1 row affected (... sec) .. TIP:: ``1.7976931348623157e+308`` represents the value 1.7976931348623157 × 10\ :sup:`308` .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT number FROM my_table; +-------------------------+ | number | +-------------------------+ | 1.7976931348623157e+308 | +-------------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DELETE FROM my_table; DELETE OK, 1 row affected (... sec) cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) You can insert values which exceed the maximum precision, like so:: cr> INSERT INTO my_table ( ... number ... ) VALUES ( ... 1.79769313486231572014e+308 ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) However, the recorded value will be an approximation of the original (i.e., the additional precision is lost):: cr> SELECT number FROM my_table; +-------------------------+ | number | +-------------------------+ | 1.7976931348623157e+308 | +-------------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. SEEALSO:: :ref:`CrateDB floating-point values ` .. _data-types-dates-times: Dates and times --------------- CrateDB supports the following types for dates and times: .. contents:: :local: :depth: 2 With a few exceptions (noted below), the ``+`` and ``-`` :ref:`operators ` can be used to create :ref:`arithmetic expressions ` with temporal operands: +---------------+----------------+---------------+ | Operand | Operator | Operand | +===============+================+===============+ | ``TIMESTAMP`` | ``-`` | ``TIMESTAMP`` | +---------------+----------------+---------------+ | ``INTERVAL`` | ``+`` | ``TIMESTAMP`` | +---------------+----------------+---------------+ | ``TIMESTAMP`` | ``+`` or ``-`` | ``INTERVAL`` | +---------------+----------------+---------------+ | ``INTERVAL`` | ``+`` or ``-`` | ``INTERVAL`` | +---------------+----------------+---------------+ .. NOTE:: If an object column is :ref:`dynamically created `, the type detection will not recognize date and time types, meaning that date and time type columns must always be declared beforehand. .. _type-timestamp: ``TIMESTAMP`` ''''''''''''' A timestamp expresses a specific date and time as the number of milliseconds since the `Unix epoch`_ (i.e., ``1970-01-01T00:00:00Z``). ``TIMESTAMP`` has two variants: - :ref:`TIMESTAMP WITHOUT TIME ZONE ` which presents all values in UTC. - :ref:`TIMESTAMP WITH TIME ZONE ` which presents all values in UTC in respect to the ``TIME ZONE`` related offset. 
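
For a quick side-by-side comparison of the two variants (an illustrative
example; the expected values follow from the ``WITH TIME ZONE`` and
``WITHOUT TIME ZONE`` examples later in this section), casting the same string
literal shows that only the first variant applies the ``+01:00`` offset:

.. code-block:: psql

    -- The offset is applied for WITH TIME ZONE and stripped for WITHOUT TIME
    -- ZONE, so the resulting epoch values differ by one hour (3600000 ms).
    SELECT CAST('1970-01-02T00:00:00+01:00' AS TIMESTAMP WITH TIME ZONE) AS ts_tz,
           CAST('1970-01-02T00:00:00+01:00' AS TIMESTAMP WITHOUT TIME ZONE) AS ts;
    -- expected: ts_tz = 82800000, ts = 86400000
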
By default a ``TIMESTAMP`` is an alias for :ref:`TIMESTAMP WITHOUT TIME ZONE `. Timestamps can be expressed as string literals (e.g., ``'1970-01-02T00:00:00'``) with the following syntax: .. code-block:: text date-element [time-separator [time-element [offset]]] date-element: yyyy-MM-dd time-separator: 'T' | ' ' time-element: HH:mm:ss [fraction] fraction: '.' digit+ offset: {+ | -} HH [:mm] | 'Z' .. SEEALSO:: For more information about date and time formatting, see `Java 15\: Patterns for Formatting and Parsing`_. Time zone syntax as defined by `ISO 8601 time zone designators`_. Internally, CrateDB stores timestamps as :ref:`BIGINT ` values, which are limited to eight bytes. If you cast a :ref:`BIGINT ` to a ``TIMEZONE``, the integer value will be interpreted as the number of milliseconds since the Unix epoch. Using the :ref:`date_format() ` function, for readability:: cr> SELECT ... date_format(0::TIMESTAMP) AS ts_0, ... date_format(1000::TIMESTAMP) AS ts_1; +-----------------------------+-----------------------------+ | ts_0 | ts_1 | +-----------------------------+-----------------------------+ | 1970-01-01T00:00:00.000000Z | 1970-01-01T00:00:01.000000Z | +-----------------------------+-----------------------------+ SELECT 1 row in set (... sec) If you cast a :ref:`REAL ` or a :ref:`DOUBLE PRECISION ` to a ``TIMESTAMP``, the numeric value will be interpreted as the number of seconds since the Unix epoch, with fractional values approximated to the nearest millisecond:: cr> SELECT ... date_format(0::TIMESTAMP) AS ts_0, ... date_format(1.5::TIMESTAMP) AS ts_1; +-----------------------------+-----------------------------+ | ts_0 | ts_1 | +-----------------------------+-----------------------------+ | 1970-01-01T00:00:00.000000Z | 1970-01-01T00:00:01.500000Z | +-----------------------------+-----------------------------+ SELECT 1 row in set (... sec) If you cast a literal to a ``TIMESTAMP``, years outside the range 0000 to 9999 must be prefixed by the plus or minus symbol. See also `Year.parse Javadoc`_:: cr> SELECT '+292278993-12-31T23:59:59.999Z'::TIMESTAMP as tmstp; +---------------------+ | tmstp | +---------------------+ | 9223372017129599999 | +---------------------+ SELECT 1 row in set (... sec) .. CAUTION:: Due to internal date parsing, the full ``BIGINT`` range is not supported for timestamp values. The valid range of dates is from ``292275054BC`` to ``292278993AD``. When inserting timestamps smaller than ``-999999999999999`` (equal to ``-29719-04-05T22:13:20.001Z``) or bigger than ``999999999999999`` (equal to ``33658-09-27T01:46:39.999Z``) rounding issues may occur. A ``TIMESTAMP`` can be further defined as: .. contents:: :local: :depth: 1 .. _type-timestamp-with-tz: ``WITH TIME ZONE`` .................. If you define a timestamp as ``TIMESTAMP WITH TIME ZONE``, CrateDB will convert string literals to `Coordinated Universal Time`_ (UTC) using the ``offset`` value (e.g., ``+01:00`` for plus one hour or ``Z`` for UTC). Example:: cr> CREATE TABLE my_table ( ... ts_tz_1 TIMESTAMP WITH TIME ZONE, ... ts_tz_2 TIMESTAMP WITH TIME ZONE ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... ts_tz_1, ... ts_tz_2 ... ) VALUES ( ... '1970-01-02T00:00:00', ... '1970-01-02T00:00:00+01:00' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... ts_tz_1, ... ts_tz_2 ... 
FROM my_table; +----------+----------+ | ts_tz_1 | ts_tz_2 | +----------+----------+ | 86400000 | 82800000 | +----------+----------+ SELECT 1 row in set (... sec) You can use :ref:`date_format() ` to make the output easier to read:: cr> SELECT ... date_format('%Y-%m-%dT%H:%i', ts_tz_1) AS ts_tz_1, ... date_format('%Y-%m-%dT%H:%i', ts_tz_2) AS ts_tz_2 ... FROM my_table; +------------------+------------------+ | ts_tz_1 | ts_tz_2 | +------------------+------------------+ | 1970-01-02T00:00 | 1970-01-01T23:00 | +------------------+------------------+ SELECT 1 row in set (... sec) Notice that ``ts_tz_2`` is smaller than ``ts_tz_1`` by one hour. CrateDB used the ``+01:00`` offset (i.e., *ahead of UTC by one hour*) to convert the second timestamp into UTC prior to insertion. Contrast this with the behavior of :ref:`WITHOUT TIME ZONE `. .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. NOTE:: ``TIMESTAMPTZ`` is an alias for ``TIMESTAMP WITH TIME ZONE``. .. _type-timestamp-without-tz: ``WITHOUT TIME ZONE`` ..................... If you define a timestamp as ``TIMESTAMP WITHOUT TIME ZONE``, CrateDB will convert string literals to `Coordinated Universal Time`_ (UTC) without using the ``offset`` value (i.e., any time zone information present is stripped prior to insertion). Example:: cr> CREATE TABLE my_table ( ... ts_1 TIMESTAMP WITHOUT TIME ZONE, ... ts_2 TIMESTAMP WITHOUT TIME ZONE ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... ts_1, ... ts_2 ... ) VALUES ( ... '1970-01-02T00:00:00', ... '1970-01-02T00:00:00+01:00' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) Using the :ref:`date_format() ` function, for readability:: cr> SELECT ... date_format('%Y-%m-%dT%H:%i', ts_1) AS ts_1, ... date_format('%Y-%m-%dT%H:%i', ts_2) AS ts_2 ... FROM my_table; +------------------+------------------+ | ts_1 | ts_2 | +------------------+------------------+ | 1970-01-02T00:00 | 1970-01-02T00:00 | +------------------+------------------+ SELECT 1 row in set (... sec) Notice that ``ts_1`` and ``ts_2`` are identical. CrateDB ignored the ``+01:00`` offset (i.e., *ahead of UTC by one hour*) when processing the second string literal. Contrast this with the behavior of :ref:`WITH TIME ZONE `. .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-timestamp-at-tz: ``AT TIME ZONE`` ................ You can use the ``AT TIME ZONE`` clause to modify a timestamp in two different ways. It converts a timestamp without time zone to a timestamp with time zone and vice versa. .. contents:: :local: :depth: 1 .. NOTE:: The ``AT TIME ZONE`` type is only supported as a type literal (i.e., for use in SQL :ref:`expressions `, like a :ref:`type cast `, as below). You cannot create table columns of type ``AT TIME ZONE``. .. _type-timestamp-tz-at-tz-convert: Convert a timestamp time zone ````````````````````````````` If you use ``AT TIME ZONE tz`` with a ``TIMESTAMP WITH TIME ZONE``, CrateDB will convert the timestamp to time zone ``tz`` and cast the return value as a :ref:`TIMESTAMP WITHOUT TIME ZONE ` (which discards the time zone information). This process effectively allows you to correct the offset used to calculate UTC. Example:: cr> CREATE TABLE my_table ( ... ts_tz TIMESTAMP WITH TIME ZONE ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... ts_tz ... ) VALUES ( ... '1970-01-02T00:00:00' ... ); INSERT OK, 1 row affected (... sec) .. 
HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) Using the :ref:`date_format() ` function, for readability:: cr> SELECT date_format( ... '%Y-%m-%dT%H:%i', ts_tz AT TIME ZONE '+01:00' ... ) AS ts ... FROM my_table; +------------------+ | ts | +------------------+ | 1970-01-02T01:00 | +------------------+ SELECT 1 row in set (... sec) .. TIP:: The ``AT TIME ZONE`` clause does the same as the :ref:`timezone() ` function:: cr> SELECT ... date_format('%Y-%m-%dT%H:%i', ts_tz AT TIME ZONE '+01:00') AS ts_1, ... date_format('%Y-%m-%dT%H:%i', timezone('+01:00', ts_tz)) AS ts_2 ... FROM my_table; +------------------+------------------+ | ts_1 | ts_2 | +------------------+------------------+ | 1970-01-02T01:00 | 1970-01-02T01:00 | +------------------+------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-timestamp-at-tz-add: Add a timestamp time zone ````````````````````````` If you use ``AT TIME ZONE`` with a :ref:`TIMESTAMP WITHOUT TIME ZONE `, CrateDB will add the missing time zone information, recalculate the timestamp in UTC, and cast the return value as a :ref:`TIMESTAMP WITH TIME ZONE `. Example:: cr> CREATE TABLE my_table ( ... ts TIMESTAMP WITHOUT TIME ZONE ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... ts ... ) VALUES ( ... '1970-01-02T00:00:00' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) Using the :ref:`date_format() ` function, for readability:: cr> SELECT date_format( ... '%Y-%m-%dT%H:%i', ts AT TIME ZONE '+01:00' ... ) AS ts_tz ... FROM my_table; +------------------+ | ts_tz | +------------------+ | 1970-01-01T23:00 | +------------------+ SELECT 1 row in set (... sec) .. TIP:: The ``AT TIME ZONE`` clause does the same as the :ref:`timezone() ` function:: cr> SELECT date_format( ... '%Y-%m-%dT%H:%i', timezone('+01:00', ts) ... ) AS ts_tz ... FROM my_table; +------------------+ | ts_tz | +------------------+ | 1970-01-01T23:00 | +------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _type-time: ``TIME`` '''''''' A ``TIME`` expresses a specific time as the number of milliseconds since midnight along with a time zone offset. Limited to 12 bytes, with a time range from ``00:00:00.000000`` to ``23:59:59.999999`` and a time zone range from ``-18:00`` to ``18:00``. .. CAUTION:: CrateDB does not support ``TIME`` by itself or ``TIME WITHOUT TIME ZONE``. You must always specify ``TIME WITH TIME ZONE`` or its alias ``TIMETZ``. This behaviour does not comply with standard SQL and is incompatible with PostgreSQL. This behavior may change in a future version of CrateDB (see `tracking issue #11491`_). .. NOTE:: The ``TIME`` type is only supported as a type literal (i.e., for use in SQL :ref:`expressions `, like a :ref:`type cast `, as below). You cannot create table columns of type ``TIME``. Times can be expressed as string literals (e.g., ``'13:00:00'``) with the following syntax: .. code-block:: text time-element [offset] time-element: time-only [fraction] time-only: HH[[:][mm[:]ss]] fraction: '.' digit+ offset: {+ | -} time-only | geo-region geo-region: As defined by ISO 8601. Above, ``fraction`` accepts up to six digits, with a precision in microseconds. .. SEEALSO:: For more information about time formatting, see `Java 15\: Patterns for Formatting and Parsing`_. Time zone syntax as defined by `ISO 8601 time zone designators`_. 
For example:: cr> SELECT '13:00:00'::TIMETZ AS t_tz; +------------------+ | t_tz | +------------------+ | [46800000000, 0] | +------------------+ SELECT 1 row in set (... sec) The value of first element is the number of milliseconds since midnight. The value of the second element is the number of seconds corresponding to the time zone offset (zero in this instance, as no time zone was specified). For example, with a ``+01:00`` time zone:: cr> SELECT '13:00:00+01:00'::TIMETZ AS t_tz; +---------------------+ | t_tz | +---------------------+ | [46800000000, 3600] | +---------------------+ SELECT 1 row in set (... sec) The time zone offset is calculated as 3600 seconds, which is equivalent to an hour. Negative time zone offsets will return negative seconds:: cr> SELECT '13:00:00-01:00'::TIMETZ AS t_tz; +----------------------+ | t_tz | +----------------------+ | [46800000000, -3600] | +----------------------+ SELECT 1 row in set (... sec) Here's an example that uses fractional seconds:: cr> SELECT '13:59:59.999999'::TIMETZ as t_tz; +------------------+ | t_tz | +------------------+ | [50399999999, 0] | +------------------+ SELECT 1 row in set (... sec) .. CAUTION:: The current implementation of the ``TIME`` type has the following limitations: .. rst-class:: open - ``TIME`` types cannot be :ref:`cast ` to any other types (including :ref:`TEXT `) - ``TIME`` types cannot be used in :ref:`arithmetic expressions ` (e.g., with ``TIME``, ``DATE``, and ``INTERVAL`` types) - ``TIME`` types cannot be used with time and date scalar functions (e.g., :ref:`date_format() ` and :ref:`extract() `) This behaviour does not comply with standard SQL and is incompatible with PostgreSQL. This behavior may change in a future version of CrateDB (see `tracking issue #11528`_). .. _type-date: ``DATE`` '''''''' A ``DATE`` expresses a specific year, month and a day in `UTC`_. Internally, CrateDB stores dates as :ref:`BIGINT ` values, which are limited to eight bytes. If you cast a :ref:`BIGINT ` to a ``DATE``, the integer value will be interpreted as the number of milliseconds since the Unix epoch. If you cast a :ref:`REAL ` or a :ref:`DOUBLE PRECISION ` to a ``DATE``, the numeric value will be interpreted as the number of seconds since the Unix epoch. If you cast a literal to a ``DATE``, years outside the range 0000 to 9999 must be prefixed by the plus or minus symbol. See also `Year.parse Javadoc`_:: cr> SELECT '+10000-03-09'::DATE as date; +-----------------+ | date | +-----------------+ | 253408176000000 | +-----------------+ SELECT 1 row in set (... sec) .. CAUTION:: Due to internal date parsing, the full ``BIGINT`` range is not supported for timestamp values. The valid range of dates is from ``292275054BC`` to ``292278993AD``. When inserting dates smaller than ``-999999999999999`` (equal to ``-29719-04-05``) or bigger than ``999999999999999`` (equal to ``33658-09-27``) rounding issues may occur. .. _type-date-warning: .. WARNING:: The ``DATE`` type was not designed to allow time-of-day information (i.e., it is supposed to have a resolution of one day). However, CrateDB allows you violate that constraint by casting any number of milliseconds within limits to a ``DATE`` type. The result is then returned as a :ref:`TIMESTAMP `. When used in conjunction with :ref:`arithmetic expressions `, these ``TIMESTAMP`` values may produce unexpected results. This behaviour does not comply with standard SQL and is incompatible with PostgreSQL. This behavior may change in a future version of CrateDB (see `tracking issue #11528`_). 
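
To make the caveat above concrete, here is an illustrative example (the input
value and the expected output are assumptions derived from the description
above, not taken from the original examples): casting a millisecond value that
contains time-of-day information to ``DATE`` keeps that time-of-day in the
resulting ``TIMESTAMP``:

.. code-block:: psql

    -- 45296000 ms after the Unix epoch is 12:34:56 on 1970-01-01.
    -- The cast to DATE is accepted, but the result behaves like a TIMESTAMP
    -- and retains the time-of-day instead of truncating to midnight.
    SELECT date_format('%Y-%m-%dT%H:%i:%S', 45296000::DATE) AS date_with_time;
    -- expected: 1970-01-01T12:34:56
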
.. CAUTION:: The current implementation of the ``DATE`` type has the following limitations: .. rst-class:: open - ``DATE`` types cannot be added or subtracted to or from other ``DATE`` types as expected (i.e., to calculate the difference between the two in a number of days). Doing so will convert both ``DATE`` values into ``TIMESTAMP`` values before performing the operation, resulting in a ``TIMESTAMP`` value corresponding to a full date and time (see :ref:`WARNING ` above). - :ref:`Numeric data types ` cannot be added to or subtracted from ``DATE`` types as expected (e.g., to increase the date by ``n`` days). Doing so will, for example, convert the ``DATE`` into a ``TIMESTAMP`` and increase the value by ``n`` milliseconds (see :ref:`WARNING ` above). - :ref:`TIME ` types cannot be added to or subtracted from ``DATE`` types. - :ref:`INTERVAL ` types cannot be added to or subtracted from ``DATE`` types. This behaviour does not comply with standard SQL and is incompatible with PostgreSQL. This behavior may change in a future version of CrateDB (see `tracking issue #11528`_). .. NOTE:: The ``DATE`` type is only supported as a type literal (i.e., for use in SQL :ref:`expressions `, like a :ref:`type cast `, as below). You cannot create table columns of type ``DATE``. Dates can be expressed as string literals (e.g., ``'2021-03-09'``) with the following syntax: .. code-block:: text yyyy-MM-dd .. SEEALSO:: For more information about date and time formatting, see `Java 15\: Patterns for Formatting and Parsing`_. For example, using the :ref:`date_format() ` function, for readability:: cr> SELECT ... date_format( ... '%Y-%m-%d', ... '2021-03-09'::DATE ... ) AS date; +------------+ | date | +------------+ | 2021-03-09 | +------------+ SELECT 1 row in set (... sec) .. _type-interval: ``INTERVAL`` '''''''''''' An ``INTERVAL`` represents a span of time. .. NOTE:: The ``INTERVAL`` type is only supported as a type literal (i.e., for use in SQL :ref:`expressions `, like a :ref:`type cast `, as above). You cannot create table columns of type ``INTERVAL``. The basic syntax is:: INTERVAL Where ``unit`` can be any of the following: - ``YEAR`` - ``MONTH`` - ``DAY`` - ``HOUR`` - ``MINUTE`` - ``SECOND`` - ``MILLISECOND`` For example:: cr> SELECT INTERVAL '1' DAY AS result; +----------------+ | result | +----------------+ | 1 day 00:00:00 | +----------------+ SELECT 1 row in set (... sec) Intervals can be positive or negative:: cr> SELECT INTERVAL -'1' DAY AS result; +------------------+ | result | +------------------+ | -1 days 00:00:00 | +------------------+ SELECT 1 row in set (... sec) When using ``SECOND``, you can define fractions of a seconds (with a precision of zero to six digits):: cr> SELECT INTERVAL '1.5' SECOND AS result; +--------------+ | result | +--------------+ | 00:00:01.500 | +--------------+ SELECT 1 row in set (... sec) .. CAUTION:: The ``INTERVAL`` data type does not currently support the input units ``MILLENNIUM``, ``CENTURY``, ``DECADE``, or ``MICROSECOND``. This behaviour does not comply with standard SQL and is incompatible with PostgreSQL. This behavior may change in a future version of CrateDB (see `tracking issue #11490`_). 
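As a practical illustration of the unit-qualified syntax above, intervals are commonly combined with timestamp columns for time-window filtering. The following is a sketch only (the ``sensor_readings`` table and its ``ts`` column are hypothetical, and the statement is not a verified doctest)::

    SELECT *
    FROM sensor_readings
    WHERE ts >= CURRENT_TIMESTAMP - INTERVAL '15' MINUTE;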
You can also use the following syntax to express an interval:: INTERVAL <string> Where ``string`` describes the interval using one of the recognized formats: +----------------------+-----------------------+---------------------+ | Description | Example | Equivalent | +======================+=======================+=====================+ | Standard SQL format | ``1-2`` | 1 year 2 months | | (year-month) | | | +----------------------+-----------------------+---------------------+ | Standard SQL format | ``1-2 3 4:05:06`` | 1 year 2 months | | | | 3 days 4 hours | | | | 5 minutes 6 seconds | +----------------------+-----------------------+---------------------+ | Standard SQL format | ``3 4:05:06`` | 3 days 4 hours | | (day-time) | | 5 minutes 6 seconds | +----------------------+-----------------------+---------------------+ | `PostgreSQL interval | ``1 year 2 months | 1 year 2 months | | format`_ | 3 days 4 hours | 3 days 4 hours | | | 5 minutes 6 seconds`` | 5 minutes 6 seconds | +----------------------+-----------------------+---------------------+ | `ISO 8601 duration | ``P1Y2M3DT4H5M6S`` | 1 year 2 months | | format`_ | | 3 days 4 hours | | | | 5 minutes 6 seconds | +----------------------+-----------------------+---------------------+ For example:: cr> SELECT INTERVAL '1-2 3 4:05:06' AS result; +-------------------------------+ | result | +-------------------------------+ | 1 year 2 mons 3 days 04:05:06 | +-------------------------------+ SELECT 1 row in set (... sec) You can limit the precision of an interval by specifying ``<unit> TO <unit>`` after the interval ``string``. For example, you can use ``YEAR TO MONTH`` to limit an interval to a year-month value:: cr> SELECT INTERVAL '1-2 3 4:05:06' YEAR TO MONTH AS result; +------------------------+ | result | +------------------------+ | 1 year 2 mons 00:00:00 | +------------------------+ SELECT 1 row in set (... sec) You can use ``DAY TO HOUR``, as another example, to limit a day-time interval to days and hours:: cr> SELECT INTERVAL '3 4:05:06' DAY TO HOUR AS result; +-----------------+ | result | +-----------------+ | 3 days 04:00:00 | +-----------------+ SELECT 1 row in set (... sec) You can multiply an interval by an integer:: cr> SELECT 2 * INTERVAL '2 years 1 month 10 days' AS result; +---------------------------------+ | result | +---------------------------------+ | 4 years 2 mons 20 days 00:00:00 | +---------------------------------+ SELECT 1 row in set (... sec) .. TIP:: You can use intervals in combination with :ref:`CURRENT_TIMESTAMP ` to calculate values that are offset relative to the current date and time. For example, to calculate a timestamp corresponding to exactly one day ago, use:: cr> SELECT CURRENT_TIMESTAMP - INTERVAL '1' DAY AS result; +---------------+ | result | +---------------+ | ... | +---------------+ SELECT 1 row in set (... sec) .. _data-types-bit-strings: Bit strings ----------- .. _data-type-bit: ``BIT(n)`` '''''''''' A string representation of a bit sequence, useful for visualizing a `bit mask`_. Values of this type can be created using the bit string literal syntax. A bit string starts with the ``B`` prefix, followed by a sequence of ``0`` or ``1`` digits quoted within single quotes ``'``. An example:: B'00010010' The optional length specification ``n`` is a positive :ref:`integer ` that defines the maximum length, in characters, of the values that have to be stored or cast. The minimum length is ``1``. The maximum length is defined by the upper :ref:`integer ` range. For example:: cr> CREATE TABLE my_table ( ...
bit_mask BIT(4) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... bit_mask ... ) VALUES ( ... B'0110' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT bit_mask FROM my_table; +----------+ | bit_mask | +----------+ | B'0110' | +----------+ SELECT 1 row in set (... sec) Inserting values that are either too short or too long results in an error:: cr> INSERT INTO my_table ( ... bit_mask ... ) VALUES ( ... B'00101' ... ); SQLParseException[bit string length 5 does not match type bit(4)] .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _data-types-ip-addresses: IP addresses ------------ .. _type-ip: ``IP`` '''''' An ``IP`` is a string representation of an `IP address`_ (IPv4 or IPv6). Internally IP addresses are stored as ``BIGINT`` values, allowing expected sorting, filtering, and aggregation. For example:: cr> CREATE TABLE my_table ( ... fqdn TEXT, ... ip_addr IP ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... fqdn, ... ip_addr ... ) VALUES ( ... 'localhost', ... '127.0.0.1' ... ), ( ... 'router.local', ... 'ff:0:ff:ff:0:ffff:c0a8:64' ... ); INSERT OK, 2 rows affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT fqdn, ip_addr FROM my_table ORDER BY fqdn; +--------------+---------------------------+ | fqdn | ip_addr | +--------------+---------------------------+ | localhost | 127.0.0.1 | | router.local | ff:0:ff:ff:0:ffff:c0a8:64 | +--------------+---------------------------+ SELECT 2 rows in set (... sec) The ``fqdn`` column (see `Fully Qualified Domain Name`_) will accept any value because it was specified as :ref:`TEXT `. However, trying to insert ``fake.ip`` won't work, because it is not a correctly formatted ``IP`` address:: cr> INSERT INTO my_table ( ... fqdn, ... ip_addr ... ) VALUES ( ... 'localhost', ... 'fake.ip' ... ); SQLParseException[Cannot cast `'fake.ip'` of type `text` to type `ip`] .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) IP addresses support the ``<<`` :ref:`operator `, which checks for subnet inclusion using `CIDR notation`_. The left-hand :ref:`operand ` must an :ref:`IP type ` and the right-hand must be :ref:`TEXT type ` (e.g., ``'192.168.1.5' << '192.168.1/24'``). .. _data-types-container: Container types =============== Container types are types with :ref:`nonscalar ` values that may contain other values: .. contents:: :local: :depth: 3 .. _data-types-objects: Objects ------- .. _type-object: ``OBJECT`` '''''''''' An object is structured as a collection of key-values. An object can contain any other type, including further child objects. An ``OBJECT`` column can be schemaless or can have a defined (i.e., enforced) schema. Objects are not the same as JSON objects, although they share a lot of similarities. However, objects can be :ref:`inserted as JSON strings `. Syntax:: OBJECT [ ({DYNAMIC|STRICT|IGNORED}) ] [ AS ( * ) ] The only required syntax is ``OBJECT``. The column policy (``DYNAMIC``, ``STRICT``, or ``IGNORED``) is optional and defaults to :ref:`DYNAMIC `. If the optional list of subcolumns (``columnDefinition``) is omitted, the object will have no schema. CrateDB will create a schema for :ref:`DYNAMIC ` objects upon first insert. Example:: cr> CREATE TABLE my_table ( ... title TEXT, ... quotation OBJECT, ... protagonist OBJECT(STRICT) AS ( ... age INTEGER, ... first_name TEXT, ... details OBJECT AS ( ... 
birthday TIMESTAMP WITH TIME ZONE ... ) ... ) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... title, ... quotation, ... protagonist ... ) VALUES ( ... 'Alice in Wonderland', ... { ... "words" = 'Curiouser and curiouser!', ... "length" = 3 ... }, ... { ... "age" = '10', ... "first_name" = 'Alice', ... "details" = { ... "birthday" = '1852-05-04T00:00Z'::TIMESTAMPTZ ... } ... } ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... protagonist['first_name'] AS name, ... date_format( ... '%D %b %Y', ... 'GMT', ... protagonist['details']['birthday'] ... ) AS born, ... protagonist['age'] AS age ... FROM my_table; +-------+--------------+-----+ | name | born | age | +-------+--------------+-----+ | Alice | 4th May 1852 | 10 | +-------+--------------+-----+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) New sub-columns can be added to the ``columnDefinition`` at any time. See :ref:`Adding columns ` for details. .. _type-object-column-policy: Object column policy .................... .. _type-object-columns-strict: ``STRICT`` `````````` If the column policy is configured as ``STRICT``, CrateDB will reject any subcolumn that is not defined upfront by ``columnDefinition``. Example:: cr> CREATE TABLE my_table ( ... title TEXT, ... protagonist OBJECT(STRICT) AS ( ... name TEXT ... ) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... title, ... protagonist ... ) VALUES ( ... 'Alice in Wonderland', ... { ... "age" = '10' ... } ... ); SQLParseException[Cannot add column `age` to strict object `protagonist`] The insert above failed because the ``age`` sub-column is not defined. .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. NOTE:: Objects with a ``STRICT`` column policy and no ``columnDefinition`` will have one unusable column that will always be ``NULL``. .. _type-object-columns-dynamic: ``DYNAMIC`` ``````````` If the column policy is configured as ``DYNAMIC`` (the default), inserts may dynamically add new subcolumns to the object definition. Example:: cr> CREATE TABLE my_table ( ... title TEXT, ... quotation OBJECT ... ); CREATE OK, 1 row affected (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) The following statement is equivalent to the above:: cr> CREATE TABLE my_table ( ... title TEXT, ... quotation OBJECT(DYNAMIC) ... ); CREATE OK, 1 row affected (... sec) .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) The following statement is also equivalent to the above:: cr> CREATE TABLE my_table ( ... title TEXT, ... quotation OBJECT(DYNAMIC) AS ( ... words TEXT, ... length SMALLINT ... ) ... ); CREATE OK, 1 row affected (... sec) You can insert using the existing columns:: cr> INSERT INTO my_table ( ... title, ... quotation ... ) VALUES ( ... 'Alice in Wonderland', ... { ... "words" = 'Curiouser and curiouser!', ... "length" = 3 ... } ... ); INSERT OK, 1 row affected (... sec) Or you can add new columns:: cr> INSERT INTO my_table ( ... title, ... quotation ... ) VALUES ( ... 'Alice in Wonderland', ... { ... "words" = 'DRINK ME', ... "length" = 2, ... "chapter" = 1 ... } ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) All rows have the same columns (including newly added columns), but missing records will be returned as :ref:`NULL ` values:: cr> SELECT ... 
quotation['chapter'] as chapter, ... quotation['words'] as quote ... FROM my_table ... ORDER BY chapter ASC; +---------+--------------------------+ | chapter | quote | +---------+--------------------------+ | 1 | DRINK ME | | NULL | Curiouser and curiouser! | +---------+--------------------------+ SELECT 2 rows in set (... sec) New columns are usable like any other subcolumn. You can retrieve them, sort by them, and use them in where clauses. .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. NOTE:: Adding new columns to an object with a ``DYNAMIC`` policy will affect the schema of the table. Once a column is added, it shows up in the ``information_schema.columns`` table and its type and attributes are fixed. If a new column ``a`` was added with type ``INTEGER``, adding strings to the column will result in an error. Dynamically added columns will always be analyzed as-is with the :ref:`plain analyzer `, which means the column will be indexed but not tokenized in the case of ``TEXT`` columns. .. _type-object-columns-ignored: ``IGNORED`` ``````````` If the column policy is configured as ``IGNORED``, inserts may dynamically add new subcolumns to the object definition. However, dynamically added subcolumns do not cause a schema update and the values contained will not be indexed. Because dynamically created columns are not recorded in the schema, you can insert mixed types into them. For example, one row may insert an integer and the next row may insert an object. Objects with a :ref:`STRICT ` or :ref:`DYNAMIC ` column policy do not allow this. Example:: cr> CREATE TABLE my_table ( ... title TEXT, ... protagonist OBJECT(IGNORED) AS ( ... name TEXT, ... chapter SMALLINT ... ) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... title, ... protagonist ... ) VALUES ( ... 'Alice in Wonderland', ... { ... "name" = 'Alice', ... "chapter" = 1, ... "size" = { ... "value" = 10, ... "units" = 'inches' ... } ... } ... ); INSERT OK, 1 row affected (... sec) :: cr> INSERT INTO my_table ( ... title, ... protagonist ... ) VALUES ( ... 'Alice in Wonderland', ... { ... "name" = 'Alice', ... "chapter" = 2, ... "size" = 'As big as a room' ... } ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table; REFRESH OK, 1 row affected (... sec) :: cr> SELECT ... protagonist['name'] as name, ... protagonist['chapter'] as chapter, ... protagonist['size'] as size ... FROM my_table ... ORDER BY protagonist['chapter'] ASC; +-------+---------+----------------------------------+ | name | chapter | size | +-------+---------+----------------------------------+ | Alice | 1 | {"units": "inches", "value": 10} | | Alice | 2 | As big as a room | +-------+---------+----------------------------------+ SELECT 2 rows in set (... sec) Reflecting the types of the columns:: cr> SELECT ... pg_typeof(protagonist['name']) as name_type, ... pg_typeof(protagonist['chapter']) as chapter_type, ... pg_typeof(protagonist['size']) as size_type ... FROM my_table ... ORDER BY protagonist['chapter'] ASC; +-----------+--------------+-----------+ | name_type | chapter_type | size_type | +-----------+--------------+-----------+ | text | smallint | undefined | | text | smallint | undefined | +-----------+--------------+-----------+ SELECT 2 rows in set (... sec) .. NOTE:: Given that dynamically added sub-columns of an ``IGNORED`` object are not indexed, filter operations on these columns cannot utilize the index and instead a value lookup is performed for each matching row. 
This can be mitigated by combining a filter using the ``AND`` clause with other predicates on indexed columns. Furthermore, values for dynamically added sub-columns of an ``IGNORED`` object aren't stored in a column store, which means that ordering on these columns or using them with aggregates is also slower than using the same operations on regular columns. For some operations it may also be necessary to add an explicit type cast because there is no type information available in the schema. An example:: cr> SELECT ... protagonist['name'] as name, ... protagonist['chapter'] as chapter, ... protagonist['size'] as size ... FROM my_table ... ORDER BY protagonist['size']::TEXT ASC; +-------+---------+----------------------------------+ | name | chapter | size | +-------+---------+----------------------------------+ | Alice | 2 | As big as a room | | Alice | 1 | {"units": "inches", "value": 10} | +-------+---------+----------------------------------+ SELECT 2 rows in set (... sec) Given that it is possible to have values of different types within the same sub-column of an ignored object, aggregations may fail at runtime:: cr> SELECT protagonist['size']::BIGINT FROM my_table ORDER BY protagonist['chapter'] LIMIT 1; SQLParseException[Cannot cast value `{value=10, units=inches}` to type `bigint`] .. HIDE: cr> DROP TABLE my_table; DROP OK, 1 row affected (... sec) .. _data-types-object-literals: Object literals ............... You can insert objects using object literals. Object literals are delimited using curly brackets and key-value pairs are connected via ``=``. Synopsis:: { [ ident = expr [ , ... ] ] } Here, ``ident`` is the key and ``expr`` is the value. The key must be a lowercase column identifier or a quoted mixed-case column identifier. The value must be a value literal (object literals are permitted and can be nested in this way). Empty object literal:: {} Boolean type:: { my_bool_column = true } Text type:: { my_str_col = 'this is a text value' } Number types:: { my_int_col = 1234, my_float_col = 5.6 } Array type:: { my_array_column = ['v', 'a', 'l', 'u', 'e'] } Camel case keys must be quoted:: { "CamelCaseColumn" = 'this is a text value' } Nested object:: { nested_obj_colmn = { int_col = 1234, str_col = 'text value' } } You can even specify a :ref:`placeholder parameter ` for a value:: { my_other_column = ? } Combined:: { id = 1, name = 'foo', tags = ['apple'], size = 3.1415, valid = ? } .. NOTE:: Even though they look like JSON, object literals are not JSON. If you want to use JSON, skip to the next subsection. .. SEEALSO:: :ref:`Selecting values from inner objects and nested objects ` .. _data-types-object-json: Inserting objects as JSON ......................... You can insert objects using JSON strings. To do this, you must :ref:`type cast ` the string to an object with an implicit cast (i.e., passing a string into an object column) or an explicit cast (i.e., using the ``::OBJECT`` syntax). .. TIP:: Explicit casts can improve query readability. Below you will find examples from the previous subsection rewritten to use JSON strings with explicit casts.
Empty object literal:: '{}'::object Boolean type:: '{ "my_bool_column": true }'::object Text type:: '{ "my_str_col": "this is a text value" }'::object Number types:: '{ "my_int_col": 1234, "my_float_col": 5.6 }'::object Array type:: '{ "my_array_column": ["v", "a", "l", "u", "e"] }'::object Camel case keys:: '{ "CamelCaseColumn": "this is a text value" }'::object Nested object:: '{ "nested_obj_col": { "int_col": 1234, "str_col": "foo" } }'::object .. NOTE:: You cannot use :ref:`placeholder parameters ` inside a JSON string. .. _data-types-arrays: Arrays ------ .. _type-array: ``ARRAY`` ''''''''' An array is structured as a collection of other data types. Arrays can contain the following: * :ref:`Primitive types ` * :ref:`Objects ` * :ref:`Geographic types ` Array types are defined as follows:: cr> CREATE TABLE my_table_arrays ( ... tags ARRAY(TEXT), ... objects ARRAY(OBJECT AS (age INTEGER, name TEXT)) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table_arrays ( ... tags, ... objects ... ) VALUES ( ... ['foo', 'bar'], ... [{"name" = 'Alice', "age" = 33}, {"name" = 'Bob', "age" = 45}] ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table_arrays; REFRESH OK, 1 row affected (... sec) :: cr> SELECT * FROM my_table_arrays; +----------------+------------------------------------------------------------+ | tags | objects | +----------------+------------------------------------------------------------+ | ["foo", "bar"] | [{"age": 33, "name": "Alice"}, {"age": 45, "name": "Bob"}] | +----------------+------------------------------------------------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table_arrays; DROP OK, 1 row affected (... sec) An alternative is the following syntax to refer to arrays:: [] This means ``TEXT[]`` is equivalent to ``ARRAY(text)``. Arrays are always represented as zero or more literal elements inside square brackets (``[]``), for example:: [1, 2, 3] ['Zaphod', 'Ford', 'Arthur'] .. _data-types-array-literals: Array literals .............. Arrays can be written using the array constructor ``ARRAY[]`` or short ``[]``. The array constructor is an :ref:`expression ` that accepts both literals and expressions as its parameters. Parameters may contain zero or more elements. Synopsis:: [ ARRAY ] '[' element [ , ... ] ']' All array elements must have the same data type, which determines the inner type of the array. If an array contains no elements, its element type will be inferred by the context in which it occurs, if possible. Some valid arrays are:: [] [null] [1, 2, 3, 4, 5, 6, 7, 8] ['Zaphod', 'Ford', 'Arthur'] [?] ARRAY[true, false] ARRAY[column_a, column_b] ARRAY[ARRAY[1, 2, 1 + 2], ARRAY[3, 4, 3 + 4]] An alternative way to define arrays is to use string literals and casts to arrays. This requires a string literal that contains the elements separated by comma and enclosed with curly braces:: '{ val1, val2, val3 }' :: cr> SELECT '{ab, CD, "CD", null, "null"}'::ARRAY(TEXT) AS arr; +----------------------------------+ | arr | +----------------------------------+ | ["ab", "CD", "CD", null, "null"] | +----------------------------------+ SELECT 1 row in set (... sec) ``null`` elements are interpreted as ``null`` (none, absent), if you want the literal ``null`` string, it has to be enclosed in double quotes. This variant primarily exists for compatibility with PostgreSQL. The array constructor syntax explained further above is the preferred way to define constant array values. .. 
_data-types-arrays-nested: Nested arrays ............. You can directly define nested arrays in column definitions: :: CREATE TABLE SensorData (sensorID char(10), readings ARRAY(ARRAY(DOUBLE))); Nested arrays can also be used directly in input and output to UDFs: :: CREATE FUNCTION sort_nested_array("data" ARRAY(ARRAY(DOUBLE)), sort_dimension SMALLINT) RETURNS ARRAY(ARRAY(DOUBLE)) LANGUAGE JAVASCRIPT AS 'function sort_nested_array(data, sort_dimension) { data = data.sort(function compareFn(a, b) { if (a[sort_dimension] < b[sort_dimension]){return -1;} if (a[sort_dimension] > b[sort_dimension]){return 1;} return 0; }); return data; }'; Nested arrays can be constructed using ``ARRAY_AGG`` and accessing them requires an intermediate cast: :: CREATE TABLE metrics (ts TIMESTAMP, reading DOUBLE); INSERT INTO metrics SELECT '2022-11-01',2; INSERT INTO metrics SELECT '2022-10-01',1; WITH sorteddata AS ( SELECT sort_nested_array(ARRAY_AGG([ts,reading]),0) AS nestedarray FROM metrics ) SELECT (nestedarray[generate_series]::ARRAY(DOUBLE))[2] AS "ReadingsSortedByTimestamp" FROM generate_series(1, 2), sorteddata; +---------------------------+ | ReadingsSortedByTimestamp | +---------------------------+ | 1.0 | | 2.0 | +---------------------------+ .. NOTE:: Accessing nested arrays will generally require loading sources directly from disk, and will not be very efficient. If you find yourself using nested arrays frequently, you may want to consider splitting the data up into multiple tables instead. .. NOTE:: Nested arrays cannot be created dynamically, either as a :ref:`top level column ` or as part of a :ref:`dynamic object `. .. _type-row: ``ROW`` ======= A row type is a composite type made up of an arbitrary number of other types, similar to a ``tuple`` in programming languages like ``python``. There is currently no type literal to create values of such a type, but the type is used for the result of table functions used in the select list of a statement if the table function returns more than one column. :: cr> SELECT unnest([1, 2], ['Arthur', 'Trillian']); +----------------------------------------+ | unnest([1, 2], ['Arthur', 'Trillian']) | +----------------------------------------+ | [1, "Arthur"] | | [2, "Trillian"] | +----------------------------------------+ SELECT 2 rows in set (... sec) .. _type-float_vector: ``FLOAT_VECTOR`` ================ A ``float_vector`` type allows you to store dense vectors of float values of fixed length. It supports :ref:`KNN_MATCH ` for k-nearest neighbour search. This allows you to find vectors in a dataset which are similar to a query vector. The type can't be used as an element type of a regular array. ``float_vector`` values are defined like float arrays. An example:: cr> CREATE TABLE my_vectors ( ... xs FLOAT_VECTOR(2) ... ); CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_vectors (xs) VALUES ([3.14, 27.34]); INSERT OK, 1 row affected (... sec) Inserting a value with a different dimension than declared in ``CREATE TABLE`` results in an error. :: cr> INSERT INTO my_vectors (xs) VALUES ([3.14, 27.34, 38.4]); SQLParseException[The number of vector dimensions does not match the field type] .. HIDE: cr> REFRESH TABLE my_vectors; REFRESH OK, 1 row affected (... sec) :: cr> SELECT * FROM my_vectors; +---------------+ | xs | +---------------+ | [3.14, 27.34] | +---------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_vectors; DROP OK, 1 row affected (... sec) ..
_data-types-geo: Geographic types ================ :ref:`Geographic types ` are types with :ref:`nonscalar ` values representing points or shapes in a 2D world: .. contents:: :local: :depth: 3 .. _data-types-geo-point: Geometric points ---------------- .. _type-geo_point: ``GEO_POINT`` ''''''''''''' A ``GEO_POINT`` is a :ref:`geographic data type ` used to store latitude and longitude coordinates. To define a ``GEO_POINT`` column, use:: GEO_POINT Values for columns with the ``GEO_POINT`` type are represented and inserted using an array of doubles in the following format:: [, ] Alternatively, a `WKT`_ string can also be used to declare geo points:: 'POINT ( )' .. NOTE:: Empty geo points are not supported. Additionally, if a column is dynamically created, the type detection won't recognize neither WKT strings nor double arrays. That means columns of type ``GEO_POINT`` must always be declared beforehand. An example:: cr> CREATE TABLE my_table_geo ( ... id INTEGER PRIMARY KEY, ... pin GEO_POINT ... ) WITH (number_of_replicas = 0) CREATE OK, 1 row affected (... sec) Insert using ARRAY syntax:: cr> INSERT INTO my_table_geo ( ... id, pin ... ) VALUES ( ... 1, [13.46738, 52.50463] ... ); INSERT OK, 1 row affected (... sec) Insert using WKT syntax:: cr> INSERT INTO my_table_geo ( ... id, pin ... ) VALUES ( ... 2, 'POINT (9.7417 47.4108)' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table_geo; REFRESH OK, 1 row affected (... sec) Query data:: cr> SELECT * FROM my_table_geo; +----+-----------------------------------------+ | id | pin | +----+-----------------------------------------+ | 1 | [13.467379929497838, 52.50462996773422] | | 2 | [9.741699993610382, 47.410799972712994] | +----+-----------------------------------------+ SELECT 2 rows in set (... sec) .. HIDE: cr> DROP TABLE my_table_geo; DROP OK, 1 row affected (... sec) .. _data-types-geo-shape: Geometric shapes ---------------- .. _type-geo_shape: ``GEO_SHAPE`` ''''''''''''' A ``geo_shape`` is a :ref:`geographic data type ` used to store 2D shapes defined as `GeoJSON geometry objects`_. A ``GEO_SHAPE`` column can store different kinds of `GeoJSON geometry objects`_: - "Point" - "MultiPoint" - "LineString" - "MultiLineString", - "Polygon" - "MultiPolygon" - "GeometryCollection" .. CAUTION:: - 3D coordinates are not supported. - Empty ``Polygon`` and ``LineString`` geo shapes are not supported. An example:: cr> CREATE TABLE my_table_geo ( ... id INTEGER PRIMARY KEY, ... area GEO_SHAPE ... ) WITH (number_of_replicas = 0) CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table_geo ( ... id, area ... ) VALUES ( ... 1, 'POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))' ... ); INSERT OK, 1 row affected (... sec) .. HIDE: cr> REFRESH TABLE my_table_geo; REFRESH OK, 1 row affected (... sec) :: cr> SELECT * FROM my_table_geo; +----+--------------------------------------------------------------------------------------------------------+ | id | area | +----+--------------------------------------------------------------------------------------------------------+ | 1 | {"coordinates": [[[5.0, 5.0], [5.0, 10.0], [10.0, 10.0], [10.0, 5.0], [5.0, 5.0]]], "type": "Polygon"} | +----+--------------------------------------------------------------------------------------------------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE my_table_geo; DROP OK, 1 row affected (... sec) .. _type-geo_shape-definition: Geo shape column definition ........................... 
To define a ``GEO_SHAPE`` column, use:: GEO_SHAPE A geographical index with default parameters is created implicitly to allow for geographical queries. Its default parameters are:: GEO_SHAPE INDEX USING geohash WITH (precision='50m', distance_error_pct=0.025) There are three geographic index types: ``geohash`` (default), ``quadtree`` and ``bkdtree``. These indices are only allowed on ``geo_shape`` columns. For more information, see :ref:`type-geo_shape-index`. Both ``geohash`` and ``quadtree`` index types accept the following parameters: ``precision`` (Default: ``50m``) Define the maximum precision of the used index and thus for all indexed shapes. Given as string containing a number and an optional distance unit (defaults to ``m``). Supported units are ``inch`` (``in``), ``yard`` (``yd``), ``miles`` (``mi``), ``kilometers`` (``km``), ``meters`` (``m``), ``centimeters`` (``cm``), ``millimeters`` (``mm``). ``distance_error_pct`` (Default: ``0.025`` (2,5%)) The measure of acceptable error for shapes stored in this column expressed as a percentage value of the shape size The allowed maximum is ``0.5`` (50%). The percentage will be taken from the diagonal distance from the center of the bounding box enclosing the shape to the closest corner of the enclosing box. In effect bigger shapes will be indexed with lower precision than smaller shapes. The ratio of precision loss is determined by this setting, that means the higher the ``distance_error_pct`` the smaller the indexing precision. This will have the effect of increasing the indexed shape internally, so e.g. points that are not exactly inside this shape will end up inside it when it comes to querying as the shape has grown when indexed. ``tree_levels`` Maximum number of layers to be used by the ``PrefixTree`` defined by the index type (either ``geohash`` or ``quadtree``. See :ref:`type-geo_shape-index`). This can be used to control the precision of the used index. Since this parameter requires a certain level of understanding of the underlying implementation, users may use the ``precision`` parameter instead. CrateDB uses the ``tree_levels`` parameter internally and this is what is returned via the ``SHOW CREATE TABLE`` statement even if you use the precision parameter. Defaults to the value which is ``50m`` converted to ``precision`` depending on the index type. .. _type-geo_shape-index: Geo shape index structure ......................... Computations on very complex polygons and geometry collections are exact but very expensive. To provide fast queries even on complex shapes, CrateDB uses a different approach to store, analyze and query geo shapes. The available geo shape indexing strategies are based on two primary data structures: Prefix and BKD trees, which are described below. .. rubric:: Prefix Tree The surface of the earth is represented as a number of grid layers each with higher precision. While the upper layer has one grid cell, the layer below contains many cells for the equivalent space. Each grid cell on each layer is addressed in 2d space either by a `Geohash`_ for ``geohash`` trees or by tightly packed coordinates in a `Quadtree`_. Those addresses conveniently share the same address-prefix between lower layers and upper layers. So we are able to use a `Trie`_ to represent the grids, and `Tries`_ can be queried efficiently as their complexity is determined by the tree depth only. A geo shape is transformed into these grid cells. 
Think of this transformation process as dissecting a vector image into its pixelated counterpart, reasonably accurately. We end up with multiple images each with a better resolution, up to the configured precision. Every grid cell that processed up to the configured precision is stored in an inverted index, creating a mapping from a grid cell to all shapes that touch it. This mapping is our geographic index. The main difference is that the ``geohash`` supports higher precision than the ``quadtree`` tree. Both tree implementations support precision in order of fractions of millimeters. .. rubric:: BKD-tree In the BKD-tree-based (``bkdtree``) approach, a geo shape is decomposed into a collection of triangles. Each triangle is represented as a 7-dimensional point and stored in this format within a BKD-tree. To improve the storage efficiency of triangles within an index, the initial four dimensions are used to represent the bounding box of each triangle. These bounding boxes are stored in the internal nodes of the BKD-tree, while the remaining three dimensions are stored in the leaves to enable the reconstruction of the original triangles. The BKD-tree-based indexing strategy maintains the original shapes with an accuracy of 1 cm. Its primary advantage over the Prefix tree approach lies in its better performance in searching and indexing, coupled with a more efficient use of storage. .. _type-geo_shape-literals: Geo shape literals .................. Columns with the ``GEO_SHAPE`` type are represented and inserted as an object containing a valid `GeoJSON`_ geometry object:: { type = 'Polygon', coordinates = [ [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ] ] } Alternatively a `WKT`_ string can be used to represent a ``GEO_SHAPE`` as well:: 'POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))' .. NOTE:: It is not possible to detect a ``GEO_SHAPE`` type for a dynamically created column. Like with :ref:`type-geo_point` type, ``GEO_SHAPE`` columns need to be created explicitly using either :ref:`sql-create-table` or :ref:`sql-alter-table`. .. _type-geo_shape-geojson-examples: Geo shape GeoJSON examples .......................... Those are examples showing how to insert all possible kinds of GeoJSON types using `WKT`_ syntax. :: cr> CREATE TABLE my_table_geo ( ... id INTEGER PRIMARY KEY, ... area GEO_SHAPE ... ) WITH (number_of_replicas = 0) CREATE OK, 1 row affected (... sec) :: cr> INSERT INTO my_table_geo ( ... id, area ... ) VALUES ... (1, 'POINT (9.7417 47.4108)'), ... (2, 'MULTIPOINT (47.4108 9.7417, 9.7483 47.4106)'), ... (3, 'LINESTRING (47.4108 9.7417, 9.7483 47.4106)'), ... (4, 'MULTILINESTRING ((47.4108 9.7417, 9.7483 47.4106), (52.50463 13.46738, 52.51000 13.47000))'), ... (5, 'POLYGON ((47.4108 9.7417, 9.7483 47.4106, 9.7426 47.4142, 47.4108 9.7417))'), ... (6, 'MULTIPOLYGON (((5 5, 10 5, 10 10, 5 5)), ((6 6, 10 5, 10 10, 6 6)))'), ... (7, 'GEOMETRYCOLLECTION (POINT (9.7417 47.4108), MULTIPOINT (47.4108 9.7417, 9.7483 47.4106))') ... ; INSERT OK, 7 rows affected (... sec) .. HIDE: cr> REFRESH TABLE my_table_geo; REFRESH OK, 1 row affected (... 
sec) :: cr> SELECT * FROM my_table_geo ORDER BY id; +----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | id | area | +----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 1 | {"coordinates": [9.7417, 47.4108], "type": "Point"} | | 2 | {"coordinates": [[47.4108, 9.7417], [9.7483, 47.4106]], "type": "MultiPoint"} | | 3 | {"coordinates": [[47.4108, 9.7417], [9.7483, 47.4106]], "type": "LineString"} | | 4 | {"coordinates": [[[47.4108, 9.7417], [9.7483, 47.4106]], [[52.50463, 13.46738], [52.51, 13.47]]], "type": "MultiLineString"} | | 5 | {"coordinates": [[[47.4108, 9.7417], [9.7483, 47.4106], [9.7426, 47.4142], [47.4108, 9.7417]]], "type": "Polygon"} | | 6 | {"coordinates": [[[[5.0, 5.0], [10.0, 5.0], [10.0, 10.0], [5.0, 5.0]]], [[[6.0, 6.0], [10.0, 5.0], [10.0, 10.0], [6.0, 6.0]]]], "type": "MultiPolygon"} | | 7 | {"geometries": [{"coordinates": [9.7417, 47.4108], "type": "Point"}, {"coordinates": [[47.4108, 9.7417], [9.7483, 47.4106]], "type": "MultiPoint"}], "type": "GeometryCollection"} | +----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ SELECT 7 rows in set (... sec) .. HIDE: cr> DROP TABLE my_table_geo; DROP OK, 1 row affected (... sec) .. _data-types-casting: Type casting ============ A type ``CAST`` specifies a conversion from one data type to another. It will only succeed if the value of the :ref:`expression ` is convertible to the desired data type, otherwise an error is returned. CrateDB supports two equivalent syntaxes for type casts: :: CAST(expression AS TYPE) expression::TYPE .. contents:: :local: :depth: 1 .. _data-types-casting-exp: Cast expressions ---------------- :: CAST(expression AS TYPE) expression::TYPE .. _data-types-casting-fn: Cast functions -------------- .. _fn-cast: ``CAST`` '''''''' Example usages: :: cr> SELECT CAST(port['http'] AS BOOLEAN) AS col FROM sys.nodes LIMIT 1; +------+ | col | +------+ | TRUE | +------+ SELECT 1 row in set (... sec) :: cr> SELECT (2+10)/2::TEXT AS col; +-----+ | col | +-----+ | 6 | +-----+ SELECT 1 row in set (... sec) It is also possible to convert array structures to different data types, e.g. converting an array of integer values to a boolean array. :: cr> SELECT CAST([0,1,5] AS ARRAY(BOOLEAN)) AS active_threads ; +---------------------+ | active_threads | +---------------------+ | [false, true, true] | +---------------------+ SELECT 1 row in set (... sec) .. NOTE:: It is not possible to cast to or from ``OBJECT``, ``GEO_POINT``, and ``GEO_SHAPE`` types. .. _fn-try-cast: ``TRY_CAST`` '''''''''''' While ``CAST`` throws an error for incompatible type casts, ``TRY_CAST`` returns ``null`` in this case. Otherwise the result is the same as with ``CAST``. :: TRY_CAST(expression AS TYPE) Example usages: :: cr> SELECT TRY_CAST('true' AS BOOLEAN) AS col; +------+ | col | +------+ | TRUE | +------+ SELECT 1 row in set (... sec) Trying to cast a ``TEXT`` to ``INTEGER``, will fail with ``CAST`` if ``TEXT`` is no valid integer but return ``null`` with ``TRY_CAST``: :: cr> SELECT TRY_CAST(name AS INTEGER) AS name_as_int FROM sys.nodes LIMIT 1; +-------------+ | name_as_int | +-------------+ | NULL | +-------------+ SELECT 1 row in set (... 
sec) .. _data-types-casting-str: Cast from string literals ------------------------- This cast operation is applied to a string literal and it effectively initializes a constant of an arbitrary type. Example usages, initializing an ``INTEGER`` and a ``TIMESTAMP`` constant: :: cr> SELECT INTEGER '25' AS int; +-----+ | int | +-----+ | 25 | +-----+ SELECT 1 row in set (... sec) :: cr> SELECT TIMESTAMP WITH TIME ZONE '2029-12-12T11:44:00.24446' AS ts; +---------------+ | ts | +---------------+ | 1891770240244 | +---------------+ SELECT 1 row in set (... sec) .. NOTE:: This cast operation is limited to :ref:`primitive data types ` only. For complex types such as ``ARRAY`` or ``OBJECT``, use the :ref:`data-types-casting-fn` syntax. .. _data-types-postgres: PostgreSQL compatibility ======================== .. contents:: :local: :depth: 1 .. _data-types-postgres-aliases: Type aliases ------------ For compatibility with PostgreSQL we include some type aliases which can be used instead of the CrateDB specific type names. For example, in a type cast:: cr> SELECT 10::INT2 AS INT2; +------+ | int2 | +------+ | 10 | +------+ SELECT 1 row in set (... sec) See the table below for a full list of aliases: +-----------------------+---------------------------------+ | Alias | CrateDB Type | +=======================+=================================+ | ``SHORT`` | ``SMALLINT`` | +-----------------------+---------------------------------+ | ``INT`` | ``INTEGER`` | +-----------------------+---------------------------------+ | ``INT2`` | ``SMALLINT`` | +-----------------------+---------------------------------+ | ``INT4`` | ``INTEGER`` | +-----------------------+---------------------------------+ | ``INT8`` | ``BIGINT`` | +-----------------------+---------------------------------+ | ``LONG`` | ``BIGINT`` | +-----------------------+---------------------------------+ | ``STRING`` | ``TEXT`` | +-----------------------+---------------------------------+ | ``VARCHAR`` | ``TEXT`` | +-----------------------+---------------------------------+ | ``CHARACTER VARYING`` | ``TEXT`` | +-----------------------+---------------------------------+ | ``NAME`` | ``TEXT`` | +-----------------------+---------------------------------+ | ``REGPROC`` | ``TEXT`` | +-----------------------+---------------------------------+ | ``"CHAR"`` | ``BYTE`` | +-----------------------+---------------------------------+ | ``FLOAT`` | ``REAL`` | +-----------------------+---------------------------------+ | ``FLOAT4`` | ``REAL`` | +-----------------------+---------------------------------+ | ``FLOAT8`` | ``DOUBLE PRECISION`` | +-----------------------+---------------------------------+ | ``DOUBLE`` | ``DOUBLE PRECISION`` | +-----------------------+---------------------------------+ | ``DECIMAL`` | ``NUMERIC`` | +-----------------------+---------------------------------+ | ``TIMESTAMP`` | ``TIMESTAMP WITHOUT TIME ZONE`` | +-----------------------+---------------------------------+ | ``TIMESTAMPTZ`` | ``TIMESTAMP WITH TIME ZONE`` | +-----------------------+---------------------------------+ .. NOTE:: The :ref:`PG_TYPEOF ` system :ref:`function ` can be used to resolve the data type of any :ref:`expression `. .. _data-types-postgres-internal: Internal-use types ------------------ .. _type-char: ``"CHAR"`` '''''''''' A one-byte character used internally for enumeration items in the :ref:`PostgreSQL system catalogs `. Specified as a signed integer in the range -128 to 127. .. _type-oid: ``OID`` ''''''' An *Object Identifier* (OID). 
OIDS are used internally as primary keys in the :ref:`PostgreSQL system catalogs `. The ``OID`` type is mapped to the :ref:`integer ` data type. .. _type-regproc: ``REGPROC`` ''''''''''' An alias for the :ref:`oid ` type. The ``REGPROC`` type is used by tables in the :ref:`postgres-pg_catalog` schema to reference functions in the `pg_proc`_ table. :ref:`Casting ` a ``REGPROC`` type to a :ref:`type-text` or :ref:`integer ` type will result in the corresponding function name or ``oid`` value, respectively. .. _type-regclass: ``REGCLASS`` '''''''''''' An alias for the :ref:`oid ` type. The ``REGCLASS`` type is used by tables in the :ref:`postgres-pg_catalog` schema to reference relations in the `pg_class`_ table. :ref:`Casting ` a ``REGCLASS`` type to a :ref:`type-text` or :ref:`integer ` type will result in the corresponding relation name or ``oid`` value, respectively. .. NOTE:: String values casted to the ``REGCLASS`` type must match a valid relation name identifier, see also :ref:`identifier naming restrictions `. The given relation name won't be validated against existing relations. .. _type-oidvector: ``OIDVECTOR`` ''''''''''''' The ``OIDVECTOR`` type is used to represent one or more :ref:`oid ` values. This type is similar to an :ref:`array ` of integers. However, you cannot use it with any :ref:`scalar functions ` or :ref:`expressions `. .. SEEALSO:: :ref:`PostgreSQL: Object Identifier (OID) types ` .. _BigDecimal documentation: https://docs.oracle.com/en/java/javase/15/docs/api/java.base/java/math/BigDecimal.html .. _bit mask: https://en.wikipedia.org/wiki/Mask_(computing) .. _CIDR notation: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_blocks .. _Coordinated Universal Time: https://en.wikipedia.org/wiki/Coordinated_Universal_Time .. _double-precision floating-point: https://en.wikipedia.org/wiki/Double-precision_floating-point_format .. _fixed-point fractional number: https://en.wikipedia.org/wiki/Fixed-point_arithmetic .. _Fully Qualified Domain Name: https://en.wikipedia.org/wiki/Fully_qualified_domain_name .. _Geohash: https://en.wikipedia.org/wiki/Geohash .. _GeoJSON geometry objects: https://tools.ietf.org/html/rfc7946#section-3.1 .. _GeoJSON: https://geojson.org/ .. _IEEE 754: https://en.wikipedia.org/wiki/IEEE_754 .. _IP address: https://en.wikipedia.org/wiki/IP_address .. _ISO 8601 duration format: https://en.wikipedia.org/wiki/ISO_8601#Durations .. _ISO 8601 time zone designators: https://en.wikipedia.org/wiki/ISO_8601#Time_zone_designators .. _Java 15\: Patterns for Formatting and Parsing: https://docs.oracle.com/en/java/javase/15/docs/api/java.base/java/time/format/DateTimeFormatter.html#patterns .. _pg_class: https://www.postgresql.org/docs/10/static/catalog-pg-class.html .. _pg_proc: https://www.postgresql.org/docs/10/static/catalog-pg-proc.html .. _PostgreSQL interval format: https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT .. _Quadtree: https://en.wikipedia.org/wiki/Quadtree .. _single-precision floating-point: https://en.wikipedia.org/wiki/Single-precision_floating-point_format .. _The PostgreSQL DATE type: https://www.postgresql.org/docs/current/datatype-datetime.html .. _tracking issue #11491: https://github.com/crate/crate/issues/11491 .. _tracking issue #11490: https://github.com/crate/crate/issues/11490 .. _tracking issue #11528: https://github.com/crate/crate/issues/11528 .. _Trie: https://en.wikipedia.org/wiki/Trie .. _Tries: https://en.wikipedia.org/wiki/Trie .. 
_Unix epoch: https://en.wikipedia.org/wiki/Unix_time .. _UTC: `Coordinated Universal Time`_ .. _WKT: https://en.wikipedia.org/wiki/Well-known_text .. _Year.parse Javadoc: https://docs.oracle.com/javase/8/docs/api/java/time/Year.html#parse-java.lang.CharSequence- .. _JSON Data Types: https://en.wikipedia.org/wiki/JSON#Data_types.. highlight:: psql .. _scalar-functions: .. _builtins-scalar: ================ Scalar functions ================ Scalar functions are :ref:`functions ` that return :ref:`scalars `. .. rubric:: Table of contents .. contents:: :local: .. _scalar-string: String functions ================ .. _scalar-concat: ``concat('first_arg', second_arg, [ parameter , ... ])`` -------------------------------------------------------- Concatenates a variable number of arguments into a single string. It ignores ``NULL`` values. Returns: ``text`` :: cr> select concat('foo', null, 'bar') AS col; +--------+ | col | +--------+ | foobar | +--------+ SELECT 1 row in set (... sec) You can also use the ``||`` :ref:`operator `:: cr> select 'foo' || 'bar' AS col; +--------+ | col | +--------+ | foobar | +--------+ SELECT 1 row in set (... sec) .. NOTE:: The ``||`` operator differs from the ``concat`` function regarding the handling of ``NULL`` arguments. It will return ``NULL`` if any of the operands is ``NULL`` while the ``concat`` scalar will return an empty string if both arguments are ``NULL`` and the non-null argument otherwise. .. TIP:: The ``concat`` function can also be used for merging objects: :ref:`concat(object, object) `. .. _scalar-concat-ws: ``concat_ws('separator', second_arg, [ parameter , ... ])`` ------------------------------------------------------------------------------ Concatenates a variable number of arguments into a single string using a separator defined by the first argument. If first argument is ``NULL`` the return value is ``NULL``. Remaining ``NULL`` arguments are ignored. Returns: ``text`` :: cr> select concat_ws(',','foo', null, 'bar') AS col; +---------+ | col | +---------+ | foo,bar | +---------+ SELECT 1 row in set (... sec) .. _scalar-format: ``format('format_string', parameter, [ parameter , ... ])`` ----------------------------------------------------------- Formats a string similar to the C function ``printf``. For details about the format string syntax, see `formatter`_ Returns: ``text`` :: cr> select format('%s.%s', schema_name, table_name) AS fqtable ... from sys.shards ... where table_name = 'locations' ... limit 1; +---------------+ | fqtable | +---------------+ | doc.locations | +---------------+ SELECT 1 row in set (... sec) :: cr> select format('%tY', date) AS year ... from locations ... group by format('%tY', date) ... order by 1; +------+ | year | +------+ | 1979 | | 2013 | +------+ SELECT 2 rows in set (... sec) .. _scalar-substr: ``substr('string', from, [ count ])`` ------------------------------------- Extracts a part of a string. ``from`` specifies where to start and ``count`` the length of the part. Returns: ``text`` :: cr> select substr('crate.io', 3, 2) AS substr; +--------+ | substr | +--------+ | at | +--------+ SELECT 1 row in set (... sec) ``substr('string' FROM 'pattern')`` ----------------------------------- Extract a part from a string that matches a POSIX regular expression pattern. Returns: ``text``. If the pattern contains groups specified via parentheses it returns the first matching group. If the pattern doesn't match, the function returns ``NULL``. :: cr> SELECT ... substring('2023-08-07', '[a-z]') as no_match, ... 
substring('2023-08-07', '\d{4}-\d{2}-\d{2}') as full_date, ... substring('2023-08-07', '\d{4}-(\d{2})-\d{2}') as month; +----------+------------+-------+ | no_match | full_date | month | +----------+------------+-------+ | NULL | 2023-08-07 | 08 | +----------+------------+-------+ SELECT 1 row in set (... sec) .. _scalar-substring: ``substring(...)`` ------------------ Alias for :ref:`scalar-substr`. .. _scalar-char_length: ``char_length('string')`` ------------------------- Counts the number of characters in a string. Returns: ``integer`` :: cr> select char_length('crate.io') AS char_length; +-------------+ | char_length | +-------------+ | 8 | +-------------+ SELECT 1 row in set (... sec) Each character counts only once, regardless of its byte size. :: cr> select char_length('©rate.io') AS char_length; +-------------+ | char_length | +-------------+ | 8 | +-------------+ SELECT 1 row in set (... sec) .. _scalar-length: ``length(text)`` ---------------- Returns the number of characters in a string. The same as :ref:`char_length `. .. _scalar-bit_length: ``bit_length('string')`` ------------------------ Counts the number of bits in a string. Returns: ``integer`` .. NOTE:: CrateDB uses UTF-8 encoding internally, which uses between 1 and 4 bytes per character. :: cr> select bit_length('crate.io') AS bit_length; +------------+ | bit_length | +------------+ | 64 | +------------+ SELECT 1 row in set (... sec) :: cr> select bit_length('©rate.io') AS bit_length; +------------+ | bit_length | +------------+ | 72 | +------------+ SELECT 1 row in set (... sec) .. _scalar-octet_length: ``octet_length('string')`` -------------------------- Counts the number of bytes (octets) in a string. Returns: ``integer`` :: cr> select octet_length('crate.io') AS octet_length; +--------------+ | octet_length | +--------------+ | 8 | +--------------+ SELECT 1 row in set (... sec) :: cr> select octet_length('©rate.io') AS octet_length; +--------------+ | octet_length | +--------------+ | 9 | +--------------+ SELECT 1 row in set (... sec) .. _scalar-ascii: ``ascii(string)`` ----------------- Returns the ASCII code of the first character. For UTF-8, returns the Unicode code point of the characters. Returns: ``int`` :: cr> SELECT ascii('a') AS a, ascii('🎈') AS b; +----+--------+ | a | b | +----+--------+ | 97 | 127880 | +----+--------+ SELECT 1 row in set (... sec) .. _scalar-chr: ``chr(int)`` ------------ Returns the character with the given code. For UTF-8 the argument is treated as a Unicode code point. Returns: ``string`` :: cr> SELECT chr(65) AS a; +---+ | a | +---+ | A | +---+ SELECT 1 row in set (... sec) .. _scalar-lower: ``lower('string')`` ------------------- Converts all characters to lowercase. ``lower`` does not perform locale-sensitive or context-sensitive mappings. Returns: ``text`` :: cr> select lower('TransformMe') AS lower; +-------------+ | lower | +-------------+ | transformme | +-------------+ SELECT 1 row in set (... sec) .. _scalar-upper: ``upper('string')`` ------------------- Converts all characters to uppercase. ``upper`` does not perform locale-sensitive or context-sensitive mappings. Returns: ``text`` :: cr> select upper('TransformMe') as upper; +-------------+ | upper | +-------------+ | TRANSFORMME | +-------------+ SELECT 1 row in set (... sec) .. _scalar-initcap: ``initcap('string')`` --------------------- Converts the first letter of each word to upper case and the rest to lower case (*capitalize letters*). 
Returns: ``text`` :: cr> select initcap('heLlo WORLD') AS initcap; +-------------+ | initcap | +-------------+ | Hello World | +-------------+ SELECT 1 row in set (... sec) .. _scalar-sha1: ``sha1('string')`` ------------------ Returns: ``text`` Computes the SHA1 checksum of the given string. :: cr> select sha1('foo') AS sha1; +------------------------------------------+ | sha1 | +------------------------------------------+ | 0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33 | +------------------------------------------+ SELECT 1 row in set (... sec) .. _scalar-md5: ``md5('string')`` ----------------- Returns: ``text`` Computes the MD5 checksum of the given string. See :ref:`sha1 ` for an example. .. _scalar-replace: ``replace(text, from, to)`` --------------------------- Replaces all occurrences of ``from`` in ``text`` with ``to``. :: cr> select replace('Hello World', 'World', 'Stranger') AS hello; +----------------+ | hello | +----------------+ | Hello Stranger | +----------------+ SELECT 1 row in set (... sec) .. _scalar-translate: ``translate(string, from, to)`` ------------------------------- Performs several single-character, one-to-one translation in one operation. It translates ``string`` by replacing the characters in the ``from`` set, one-to-one positionally, with their counterparts in the ``to`` set. If ``from`` is longer than ``to``, the function removes the occurrences of the extra characters in ``from``. If there are repeated characters in ``from``, only the first mapping is considered. Synopsis:: translate(string, from, to) Examples:: cr> select translate('Crate', 'Ct', 'Dk') as translation; +-------------+ | translation | +-------------+ | Drake | +-------------+ SELECT 1 row in set (... sec) :: cr> select translate('Crate', 'rCe', 'c') as translation; +-------------+ | translation | +-------------+ | cat | +-------------+ SELECT 1 row in set (... sec) .. _scalar-trim: ``trim({LEADING | TRAILING | BOTH} 'str_arg_1' FROM 'str_arg_2')`` ------------------------------------------------------------------ Removes the longest string containing characters from ``str_arg_1`` (``' '`` by default) from the start, end, or both ends (``BOTH`` is the default) of ``str_arg_2``. If any of the two strings is ``NULL``, the result is ``NULL``. Synopsis:: trim([ [ {LEADING | TRAILING | BOTH} ] [ str_arg_1 ] FROM ] str_arg_2) Examples:: cr> select trim(BOTH 'ab' from 'abcba') AS trim; +------+ | trim | +------+ | c | +------+ SELECT 1 row in set (... sec) :: cr> select trim('ab' from 'abcba') AS trim; +------+ | trim | +------+ | c | +------+ SELECT 1 row in set (... sec) :: cr> select trim(' abcba ') AS trim; +-------+ | trim | +-------+ | abcba | +-------+ SELECT 1 row in set (... sec) .. _scalar-ltrim: ``ltrim(text, [ trimmingText ])`` --------------------------------- Removes set of characters which are matching ``trimmingText`` (``' '`` by default) to the left of ``text``. If any of the arguments is ``NULL``, the result is ``NULL``. :: cr> select ltrim('xxxzzzabcba', 'xz') AS ltrim; +-------+ | ltrim | +-------+ | abcba | +-------+ SELECT 1 row in set (... sec) .. _scalar-rtrim: ``rtrim(text, [ trimmingText ])`` --------------------------------- Removes set of characters which are matching ``trimmingText`` (``' '`` by default) to the right of ``text``. If any of the arguments is ``NULL``, the result is ``NULL``. :: cr> select rtrim('abcbaxxxzzz', 'xz') AS rtrim; +-------+ | rtrim | +-------+ | abcba | +-------+ SELECT 1 row in set (... sec) .. 
_scalar-btrim: ``btrim(text, [ trimmingText ])`` --------------------------------- A combination of :ref:`ltrim <scalar-ltrim>` and :ref:`rtrim <scalar-rtrim>`, removing the longest string matching ``trimmingText`` from both the start and end of ``text``. If any of the arguments is ``NULL``, the result is ``NULL``. :: cr> select btrim('XXHelloXX', 'XX') AS btrim; +-------+ | btrim | +-------+ | Hello | +-------+ SELECT 1 row in set (... sec) .. _scalar-quote_ident: ``quote_ident(text)`` --------------------- Returns: ``text`` Quotes a provided string argument. Quotes are added only if necessary, for example, if the string contains non-identifier characters, keywords, or would be case-folded. Embedded quotes are properly doubled. The quoted string can be used as an identifier in an SQL statement. :: cr> select pg_catalog.quote_ident('Column name') AS quoted; +---------------+ | quoted | +---------------+ | "Column name" | +---------------+ SELECT 1 row in set (... sec) .. _scalar-left: ``left('string', len)`` ----------------------- Returns the first ``len`` characters of ``string`` when ``len`` > 0, otherwise all but the last ``len`` characters. Synopsis:: left(string, len) Examples:: cr> select left('crate.io', 5) AS col; +-------+ | col | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) :: cr> select left('crate.io', -3) AS col; +-------+ | col | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-right: ``right('string', len)`` ------------------------ Returns the last ``len`` characters in ``string`` when ``len`` > 0, otherwise all but the first ``len`` characters. Synopsis:: right(string, len) Examples:: cr> select right('crate.io', 2) AS col; +-----+ | col | +-----+ | io | +-----+ SELECT 1 row in set (... sec) :: cr> select right('crate.io', -6) AS col; +-----+ | col | +-----+ | io | +-----+ SELECT 1 row in set (... sec) .. _scalar-lpad: ``lpad('string1', len[, 'string2'])`` ------------------------------------- Fill up ``string1`` to length ``len`` by prepending the characters ``string2`` (a space by default). If ``string1`` is already longer than ``len`` then it is truncated (on the right). Synopsis:: lpad(string1, len[, string2]) Example:: cr> select lpad(' I like CrateDB!!', 41, 'yes! ') AS col; +-------------------------------------------+ | col | +-------------------------------------------+ | yes! yes! yes! yes! yes! I like CrateDB!! | +-------------------------------------------+ SELECT 1 row in set (... sec) .. _scalar-rpad: ``rpad('string1', len[, 'string2'])`` ------------------------------------- Fill up ``string1`` to length ``len`` by appending the characters ``string2`` (a space by default). If ``string1`` is already longer than ``len`` then it is truncated. Synopsis:: rpad(string1, len[, string2]) Example:: cr> select rpad('Do you like Crate?', 38, ' yes!') AS col; +----------------------------------------+ | col | +----------------------------------------+ | Do you like Crate? yes! yes! yes! yes! | +----------------------------------------+ SELECT 1 row in set (... sec) .. NOTE:: In both cases, the scalar functions ``lpad`` and ``rpad`` do not accept a length greater than 50000. .. _scalar-encode: ``encode(bytea, format)`` ------------------------- Encode takes a binary string (``hex`` format) and returns a text encoding using the specified format. Supported formats are: ``base64``, ``hex``, and ``escape``. The ``escape`` format replaces unprintable characters with octal byte notation like ``\nnn``. For the reverse function, see :ref:`decode() <scalar-decode>`.
Synopsis:: encode(string1, format) Example:: cr> select encode(E'123\b\t56', 'base64') AS col; +--------------+ | col | +--------------+ | MTIzCAk1Ng== | +--------------+ SELECT 1 row in set (... sec) .. _scalar-decode: ``decode(text, format)`` ------------------------- Decodes a text encoded string using the specified format and returns a binary string (``hex`` format). Supported formats are: ``base64``, ``hex``, and ``escape``. For the reverse function, see :ref:`encode() <scalar-encode>`. Synopsis:: decode(text1, format) Example:: cr> select decode('T\214', 'escape') AS col; +--------+ | col | +--------+ | \x548c | +--------+ SELECT 1 row in set (... sec) .. _scalar-repeat: ``repeat(text, integer)`` ------------------------- Repeats a string the specified number of times. If the number of repetitions is equal to or less than zero, the function returns an empty string. Returns: ``text`` :: cr> select repeat('ab', 3) AS repeat; +--------+ | repeat | +--------+ | ababab | +--------+ SELECT 1 row in set (... sec) .. _scalar-strpos: ``strpos(string, substring)`` ----------------------------- Returns the first 1-based index of the specified substring within string. Returns zero if the substring is not found and ``NULL`` if any of the arguments is ``NULL``. Returns: ``integer`` :: cr> SELECT strpos('crate', 'ate') AS pos; +-----+ | pos | +-----+ | 3 | +-----+ SELECT 1 row in set (... sec) .. _scalar-position: ``position(substring in string)`` --------------------------------- The ``position()`` scalar function is an alias of the :ref:`scalar-strpos` scalar function. Note that the order of the arguments is reversed. .. _scalar-reverse: ``reverse(text)`` ------------------ Reverses the order of the string. Returns ``NULL`` if the argument is ``NULL``. Returns: ``text`` :: cr> select reverse('abcde') as reverse; +---------+ | reverse | +---------+ | edcba | +---------+ SELECT 1 row in set (... sec) .. _scalar-split_part: ``split_part(text, text, integer)`` ----------------------------------- Splits a string into parts using a delimiter and returns the part at the given index. The first part is addressed by index ``1``. Special Cases: * Returns the empty string if the index is greater than the number of parts. * If any of the arguments is ``NULL``, the result is ``NULL``. * If the delimiter is the empty string, the input string is considered as consisting of exactly one part. Returns: ``text`` Synopsis:: split_part(string, delimiter, index) Example:: cr> select split_part('ab--cdef--gh', '--', 2) AS part; +------+ | part | +------+ | cdef | +------+ SELECT 1 row in set (... sec) .. _scalar-parse_uri: ``parse_uri(text)`` ----------------------------------- Returns: ``object`` Parses the given URI string and returns an object containing the various components of the URI. The returned object has the following properties:: "uri" OBJECT AS ( "scheme" TEXT, "userinfo" TEXT, "hostname" TEXT, "port" INT, "path" TEXT, "query" TEXT, "fragment" TEXT ) .. csv-table:: :header: "URI Component", "Description" :widths: 25, 75 :align: left ``scheme`` , "The scheme of the URI (e.g. ``http``, ``crate``, etc.)" ``userinfo`` , "The decoded user-information component of this URI." ``hostname`` , "The hostname or IP address specified in the URI." ``port`` , "The port number specified in the URI." ``path`` , "The decoded path specified in the URI." ``query`` , "The decoded query string specified in the URI." ``fragment`` , "The decoded fragment specified in the URI." .. NOTE:: For URI properties not specified in the input string, ``null`` is returned.
Synopsis:: parse_uri(text) Example:: cr> SELECT parse_uri('crate://my_user@cluster.crate.io:5432/doc?sslmode=verify-full') as uri; +------------------------------------------------------------------------------------------------------------------------------------------------------------+ | uri | +------------------------------------------------------------------------------------------------------------------------------------------------------------+ | {"fragment": null, "hostname": "cluster.crate.io", "path": "/doc", "port": 5432, "query": "sslmode=verify-full", "scheme": "crate", "userinfo": "my_user"} | +------------------------------------------------------------------------------------------------------------------------------------------------------------+ SELECT 1 row in set (... sec) If you just want to select a specific URI component, you can use the bracket notation on the returned object:: cr> SELECT parse_uri('crate://my_user@cluster.crate.io:5432')['hostname'] as uri_hostname; +------------------+ | uri_hostname | +------------------+ | cluster.crate.io | +------------------+ SELECT 1 row in set (... sec) .. _scalar-parse_url: ``parse_url(text)`` ----------------------------------- Returns: ``object`` Parses the given URL string and returns an object containing the various components of the URL. The returned object has the following properties:: "url" OBJECT AS ( "scheme" TEXT, "userinfo" TEXT, "hostname" TEXT, "port" INT, "path" TEXT, "query" TEXT, "parameters" OBJECT AS ( "key1" ARRAY(TEXT), "key2" ARRAY(TEXT) ), "fragment" TEXT ) .. csv-table:: :header: "URL Component", "Description" :widths: 25, 75 :align: left ``scheme`` , "The scheme of the URL (e.g. ``https``, ``crate``, etc.)" ``userinfo`` , "The decoded user-information component of this URL." ``hostname`` , "The hostname or IP address specified in the URL." ``port`` , "The port number specified in the URL. If no port number is specified, the default port for the given scheme will be used." ``path`` , "The decoded path specified in the URL." ``query`` , "The decoded query string specified in the URL." ``parameters`` , "For each query parameter included in the URL, the ``parameter`` property holds an object property that stores an array of decoded text values for that specific query parameter." ``fragment`` , "The decoded fragment specified in the URL" .. NOTE:: For URL properties not specified in the input string, ``null`` is returned. Synopsis:: parse_url(text) Example:: cr> SELECT parse_url('https://my_user@cluster.crate.io:8000/doc?sslmode=verify-full') as url; +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | url | +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | {"fragment": null, "hostname": "cluster.crate.io", "parameters": {"sslmode": ["verify-full"]}, "path": "/doc", "port": 8000, "query": "sslmode=verify-full", "scheme": "https", "userinfo": "my_user"} | +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ SELECT 1 row in set (... 
sec) If you just want to select a specific URL component, you can use the bracket notation on the returned object:: cr> SELECT parse_url('https://my_user@cluster.crate.io:5432')['hostname'] as url_hostname; +------------------+ | url_hostname | +------------------+ | cluster.crate.io | +------------------+ SELECT 1 row in set (... sec) Parameter values are always treated as ``text``. There is no conversion of comma-separated parameter values into arrays:: cr> SELECT parse_url('http://crate.io?p1=1,2,3&p1=a&p2[]=1,2,3')['parameters'] as params; +-------------------------------------------+ | params | +-------------------------------------------+ | {"p1": ["1,2,3", "a"], "p2[]": ["1,2,3"]} | +-------------------------------------------+ SELECT 1 row in set (... sec) .. _scalar-date-time: Date and time functions ======================= .. _scalar-date_trunc: ``date_trunc('interval', ['timezone',] timestamp)`` --------------------------------------------------- Returns: ``timestamp with time zone`` Limits a timestamps precision to a given interval. Valid intervals are: * ``second`` * ``minute`` * ``hour`` * ``day`` * ``week`` * ``month`` * ``quarter`` * ``year`` Valid values for ``timezone`` are either the name of a time zone (for example 'Europe/Vienna') or the UTC offset of a time zone (for example '+01:00'). To get a complete overview of all possible values take a look at the `available time zones`_ supported by `Joda-Time`_. The following example shows how to use the ``date_trunc`` function to generate a day based histogram in the ``Europe/Moscow`` timezone:: cr> select ... date_trunc('day', 'Europe/Moscow', date) as day, ... count(*) as num_locations ... from locations ... group by 1 ... order by 1; +---------------+---------------+ | day | num_locations | +---------------+---------------+ | 308523600000 | 4 | | 1367352000000 | 1 | | 1373918400000 | 8 | +---------------+---------------+ SELECT 3 rows in set (... sec) If you don't specify a time zone, ``truncate`` uses UTC time:: cr> select date_trunc('day', date) as day, count(*) as num_locations ... from locations ... group by 1 ... order by 1; +---------------+---------------+ | day | num_locations | +---------------+---------------+ | 308534400000 | 4 | | 1367366400000 | 1 | | 1373932800000 | 8 | +---------------+---------------+ SELECT 3 rows in set (... sec) .. _date-bin: ``date_bin(interval, timestamp, origin)`` ----------------------------------------- ``date_bin`` "bins" the input timestamp to the specified interval, aligned with a specified origin. ``interval`` is an expression of type ``interval``. ``Timestamp`` and ``origin`` are expressions of type ``timestamp with time zone`` or ``timestamp without time zone``. The return type matches the timestamp and origin types and will be either ``timestamp with time zone`` or ``timestamp without time zone``. The return value marks the beginning of the bin into which the input timestamp is placed. If you use an interval with a single unit like ``1 second`` or ``1 minute``, this function returns the same result as :ref:`date_trunc `. Intervals with months and/or year units are not allowed. If the interval is ``1 week``, ``date_bin`` only returns the same result as ``date_trunc`` if the origin is a Monday. If at least one argument is ``NULL``, the return value is ``NULL``. The interval cannot be zero. Negative intervals are allowed and are treated the same as positive intervals. Intervals having month or year units are not supported due to varying length of those units. 
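As an informal sketch of these semantics (assuming a positive interval; this describes the observable behaviour, not the actual implementation), the start of the returned bin can be thought of as::

    origin + n * interval,   where n = floor((timestamp - origin) / interval)

i.e. bins are ``interval`` wide and one bin boundary always coincides with ``origin``.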
A timestamp can be binned to an interval of arbitrary length aligned with a custom origin. Examples: :: cr> SELECT date_bin('2 hours'::INTERVAL, ts, ... '2021-01-01T05:00:00Z'::TIMESTAMP) as bin, ... date_format('%y-%m-%d %H:%i', ... date_bin('2 hours'::INTERVAL, ts, '2021-01-01T05:00:00Z'::TIMESTAMP)) ... formatted_bin ... FROM unnest(ARRAY[ ... '2021-01-01T08:30:10Z', ... '2021-01-01T08:38:10Z', ... '2021-01-01T18:18:10Z', ... '2021-01-01T18:18:10Z' ... ]::TIMESTAMP[]) as tbl (ts); +---------------+----------------+ | bin | formatted_bin | +---------------+----------------+ | 1609484400000 | 21-01-01 07:00 | | 1609484400000 | 21-01-01 07:00 | | 1609520400000 | 21-01-01 17:00 | | 1609520400000 | 21-01-01 17:00 | +---------------+----------------+ SELECT 4 rows in set (... sec) .. TIP:: 0 can be used as a shortcut for Unix zero as the origin:: cr> select date_bin('2 hours' :: INTERVAL, ... '2021-01-01T08:30:10Z' :: timestamp without time ZONE, 0) as bin; +---------------+ | bin | +---------------+ | 1609488000000 | +---------------+ SELECT 1 row in set (... sec) Please note, that implicit cast treats numbers as is, i.e. as a timestamp in that zone and if timestamp is in non-UTC zone you might want to set numeric origin to the same zone:: cr> select date_bin('4 hours' :: INTERVAL, ... '2020-01-01T09:00:00+0200'::timestamp with time zone, ... TIMEZONE('+02:00', 0)) as bin; +---------------+ | bin | +---------------+ | 1577858400000 | +---------------+ SELECT 1 row in set (... sec) .. _scalar-extract: ``extract(field from source)`` ------------------------------ ``extract`` is a special :ref:`expression ` that translates to a function which retrieves subcolumns such as day, hour or minute from a timestamp or an interval. The return type depends on the used ``field``. Example with timestamp:: cr> select extract(day from '2014-08-23') AS day; +-----+ | day | +-----+ | 23 | +-----+ SELECT 1 row in set (... sec) Example with interval:: cr> select extract(hour from INTERVAL '5 days 12 hours 45 minutes') AS hour; +------+ | hour | +------+ | 12 | +------+ SELECT 1 row in set (... sec) Synopsis:: EXTRACT( field FROM source ) ``field`` An identifier or string literal which identifies the part of the timestamp or interval that should be extracted. ``source`` An expression that resolves to an interval, or a timestamp (with or without timezone), or is castable to a timestamp. .. NOTE:: When extracting from an :ref:`INTERVAL ` there is normalization of units, up to days e.g.:: cr> SELECT extract(day from INTERVAL '14 years 1250 days 49 hours') AS days; +------+ | days | +------+ | 1252 | +------+ SELECT 1 row in set (... sec) The following fields are supported: ``CENTURY`` | *Return type:* ``integer`` | century of era Returns the ISO representation which is a straight split of the date. Year 2000 century 20 and year 2001 is also century 20. This is different to the GregorianJulian (GJ) calendar system where 2001 would be century 21. ``YEAR`` | *Return type:* ``integer`` | the year field ``QUARTER`` | *Return type:* ``integer`` | the quarter of the year (1 - 4) ``MONTH`` | *Return type:* ``integer`` | the month of the year ``WEEK`` | *Return type:* ``integer`` | the week of the year ``DAY`` | *Return type:* ``integer`` | the day of the month for timestamps, days for intervals ``DAY_OF_MONTH`` | *Return type:* ``integer`` | same as ``day`` ``DAY_OF_WEEK`` | *Return type:* ``integer`` | day of the week. 
Starting with Monday (1) to Sunday (7) ``DOW`` | *Return type:* ``integer`` | same as ``day_of_week`` ``DAY_OF_YEAR`` | *Return type:* ``integer`` | the day of the year (1 - 365 / 366) ``DOY`` | *Return type:* ``integer`` | same as ``day_of_year`` ``HOUR`` | *Return type:* ``integer`` | the hour field ``MINUTE`` | *Return type:* ``integer`` | the minute field ``SECOND`` | *Return type:* ``integer`` | the second field ``EPOCH`` | *Return type:* ``double precision`` | The number of seconds since Jan 1, 1970. | Can be negative if earlier than Jan 1, 1970. .. _scalar-current_time: ``CURRENT_TIME`` ---------------- The ``CURRENT_TIME`` :ref:`expression ` returns the time in microseconds since midnight UTC at the time the SQL statement was handled. Clock time is looked up at most once within the scope of a single query, to ensure that multiple occurrences of ``CURRENT_TIME`` :ref:`evaluate ` to the same value. Synopsis:: CURRENT_TIME [ ( precision ) ] ``precision`` Must be a positive integer between 0 and 6. The default value is 6. It determines the number of fractional seconds to output. A value of 0 means the time will have second precision, no fractional seconds (microseconds) are given. .. NOTE:: No guarantee is provided about the accuracy of the underlying clock, results may be limited to millisecond precision, depending on the system. .. _scalar-current_timestamp: ``CURRENT_TIMESTAMP`` --------------------- The ``CURRENT_TIMESTAMP`` expression returns the timestamp in milliseconds since midnight UTC at the time the SQL statement was handled. Therefore, the same timestamp value is returned for every invocation of a single statement. Synopsis:: CURRENT_TIMESTAMP [ ( precision ) ] ``precision`` Must be a positive integer between ``0`` and ``3``. The default value is ``3``. This value determines the number of fractional seconds to output. A value of ``0`` means the timestamp will have second precision, no fractional seconds (milliseconds) are given. .. TIP:: To get an offset value of ``CURRENT_TIMESTAMP`` (e.g., this same time one day ago), you can add or subtract an :ref:`interval `, like so:: CURRENT_TIMESTAMP - '1 day'::interval .. NOTE:: If the ``CURRENT_TIMESTAMP`` function is used in :ref:`ddl-generated-columns` it behaves slightly different in ``UPDATE`` operations. In such a case the actual timestamp of each row update is returned. .. _scalar-curdate: ``CURDATE()`` ---------------- The ``CURDATE()`` scalar function is an alias of the :ref:`scalar-current_date` expression. Synopsis:: CURDATE() .. _scalar-current_date: ``CURRENT_DATE`` ---------------- The ``CURRENT_DATE`` expression returns the date in UTC timezone at the time the SQL statement was handled. Clock time is looked up at most once within the scope of a single query, to ensure that multiple occurrences of ``CURRENT_DATE`` evaluate to the same value. Synopsis:: CURRENT_DATE .. _scalar-now: ``now()`` --------- Returns the current date and time in UTC. This is the same as ``current_timestamp`` Returns: ``timestamp with time zone`` Synopsis:: now() .. _scalar-date_format: ``date_format([format_string, [timezone,]] timestamp)`` ------------------------------------------------------- The ``date_format`` function formats a timestamp as string according to the (optional) format string. Returns: ``text`` Synopsis:: DATE_FORMAT( [ format_string, [ timezone, ] ] timestamp ) The only mandatory argument is the ``timestamp`` value to format. 
It can be any :ref:`expression ` that is safely convertible to timestamp data type with or without timezone. The syntax for the ``format_string`` is 100% compatible to the syntax of the `MySQL date_format`_ function. For reference, the format is listed in detail below: .. csv-table:: :header: "Format Specifier", "Description" ``%a``, "Abbreviated weekday name (Sun..Sat)" ``%b``, "Abbreviated month name (Jan..Dec)" ``%c``, "Month in year, numeric (0..12)" ``%D``, "Day of month as ordinal number (1st, 2nd, ... 24th)" ``%d``, "Day of month, padded to 2 digits (00..31)" ``%e``, "Day of month (0..31)" ``%f``, "Microseconds, padded to 6 digits (000000..999999)" ``%H``, "Hour in 24-hour clock, padded to 2 digits (00..23)" ``%h``, "Hour in 12-hour clock, padded to 2 digits (01..12)" ``%I``, "Hour in 12-hour clock, padded to 2 digits (01..12)" ``%i``, "Minutes, numeric (00..59)" ``%j``, "Day of year, padded to 3 digits (001..366)" ``%k``, "Hour in 24-hour clock (0..23)" ``%l``, "Hour in 12-hour clock (1..12)" ``%M``, "Month name (January..December)" ``%m``, "Month in year, numeric, padded to 2 digits (00..12)" ``%p``, "AM or PM" ``%r``, "Time, 12-hour (``hh:mm:ss`` followed by AM or PM)" ``%S``, "Seconds, padded to 2 digits (00..59)" ``%s``, "Seconds, padded to 2 digits (00..59)" ``%T``, "Time, 24-hour (``hh:mm:ss``)" ``%U``, "Week number, Sunday as first day of the week, first week of the year (01) is the one starting in this year, week 00 starts in last year (00..53)" ``%u``, "Week number, Monday as first day of the week, first week of the year (01) is the one with at least 4 days in this year (00..53)" ``%V``, "Week number, Sunday as first day of the week, first week of the year (01) is the one starting in this year, uses the week number of the last year, if the week started in last year (01..53)" ``%v``, "Week number, Monday as first day of the week, first week of the year (01) is the one with at least 4 days in this year, uses the week number of the last year, if the week started in last year (01..53)" ``%W``, "Weekday name (Sunday..Saturday)" ``%w``, "Day of the week (0=Sunday..6=Saturday)" ``%X``, "Week year, Sunday as first day of the week, numeric, four digits; used with %V" ``%x``, "Week year, Monday as first day of the week, numeric, four digits; used with %v" ``%Y``, "Year, numeric, four digits" ``%y``, "Year, numeric, two digits" ``%%``, "A literal '%' character" ``%x``, "x, for any 'x' not listed above" If no ``format_string`` is given the default format will be used:: %Y-%m-%dT%H:%i:%s.%fZ :: cr> select date_format('1970-01-01') as epoque; +-----------------------------+ | epoque | +-----------------------------+ | 1970-01-01T00:00:00.000000Z | +-----------------------------+ SELECT 1 row in set (... sec) Valid values for ``timezone`` are either the name of a time zone (for example 'Europe/Vienna') or the UTC offset of a time zone (for example '+01:00'). To get a complete overview of all possible values take a look at the `available time zones`_ supported by `Joda-Time`_. The ``timezone`` will be ``UTC`` if not provided:: cr> select date_format('%W the %D of %M %Y %H:%i %p', 0) as epoque; +-------------------------------------------+ | epoque | +-------------------------------------------+ | Thursday the 1st of January 1970 00:00 AM | +-------------------------------------------+ SELECT 1 row in set (... 
sec) :: cr> select date_format('%Y/%m/%d %H:%i', 'EST', 0) as est_epoque; +------------------+ | est_epoque | +------------------+ | 1969/12/31 19:00 | +------------------+ SELECT 1 row in set (... sec) .. _scalar-timezone: ``timezone(timezone, timestamp)`` --------------------------------- The timezone scalar function converts values of ``timestamp`` without time zone to/from timestamp with time zone. Synopsis:: TIMEZONE(timezone, timestamp) It has two variants depending on the type of ``timestamp``: .. csv-table:: :header: "Type of timestamp", "Return Type", "Description" "timestamp without time zone OR bigint", "timestamp with time zone", "Treat given timestamp without time zone as located in the specified timezone" "timestamp with time zone", "timestamp without time zone", "Convert given timestamp with time zone to the new timezone with no time zone designation" :: cr> select ... 257504400000 as no_tz, ... date_format( ... '%Y-%m-%d %h:%i', 257504400000 ... ) as no_tz_str, ... timezone( ... 'Europe/Madrid', 257504400000 ... ) as in_madrid, ... date_format( ... '%Y-%m-%d %h:%i', ... timezone( ... 'Europe/Madrid', 257504400000 ... ) ... ) as in_madrid_str; +--------------+------------------+--------------+------------------+ | no_tz | no_tz_str | in_madrid | in_madrid_str | +--------------+------------------+--------------+------------------+ | 257504400000 | 1978-02-28 09:00 | 257500800000 | 1978-02-28 08:00 | +--------------+------------------+--------------+------------------+ SELECT 1 row in set (... sec) :: cr> select ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp with time zone ... ) as epoque, ... date_format( ... '%Y-%m-%d %h:%i', ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp with time zone ... ) ... ) as epoque_str; +--------------+------------------+ | epoque | epoque_str | +--------------+------------------+ | 257508000000 | 1978-02-28 10:00 | +--------------+------------------+ SELECT 1 row in set (... sec) :: cr> select ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp without time zone ... ) as epoque, ... date_format( ... '%Y-%m-%d %h:%i', ... timezone( ... 'Europe/Madrid', ... '1978-02-28T10:00:00+01:00'::timestamp without time zone ... ) ... ) as epoque_str; +--------------+------------------+ | epoque | epoque_str | +--------------+------------------+ | 257504400000 | 1978-02-28 09:00 | +--------------+------------------+ SELECT 1 row in set (... sec) .. _scalar-to_char: ``to_char(expression, format_string)`` -------------------------------------- The ``to_char`` function converts a ``timestamp`` or ``interval`` value to a string, based on a given format string. Returns: ``text`` Synopsis:: TO_CHAR( expression, format_string ) Here, ``expression`` can be any value with the type of ``timestamp`` (with or without a timezone) or ``interval``. The syntax for the ``format_string`` differs based the type of the :ref:`expression `. 
For ``timestamp`` expressions, the ``format_string`` is a template string containing any of the following symbols: +-----------------------+-----------------------------------------------------+ | Pattern | Description | +=======================+=====================================================+ | ``HH`` / ``HH12`` | Hour of day (01-12) | +-----------------------+-----------------------------------------------------+ | ``HH24`` | Hour of day (00-23) | +-----------------------+-----------------------------------------------------+ | ``MI`` | Minute (00-59) | +-----------------------+-----------------------------------------------------+ | ``SS`` | Second (00-59) | +-----------------------+-----------------------------------------------------+ | ``MS`` | Millisecond (000-999) | +-----------------------+-----------------------------------------------------+ | ``US`` | Microsecond (000000-999999) | +-----------------------+-----------------------------------------------------+ | ``FF1`` | Tenth of second (0-9) | +-----------------------+-----------------------------------------------------+ | ``FF2`` | Hundredth of second (00-99) | +-----------------------+-----------------------------------------------------+ | ``FF3`` | Millisecond (000-999) | +-----------------------+-----------------------------------------------------+ | ``FF4`` | Tenth of millisecond (0000-9999) | +-----------------------+-----------------------------------------------------+ | ``FF5`` | Hundredth of millisecond (00000-99999) | +-----------------------+-----------------------------------------------------+ | ``FF6`` | Microsecond (000000-999999) | +-----------------------+-----------------------------------------------------+ | ``SSSS`` / ``SSSSS`` | Seconds past midnight (0-86399) | +-----------------------+-----------------------------------------------------+ | ``AM`` / ``am`` / | Meridiem indicator | | ``PM`` / ``pm`` | | +-----------------------+-----------------------------------------------------+ | ``A.M.`` / ``a.m.`` / | Meridiem indicator (with periods) | | ``P.M.`` / ``p.m.`` | | +-----------------------+-----------------------------------------------------+ | ``Y,YYY`` | 4 digit year with comma | +-----------------------+-----------------------------------------------------+ | ``YYYY`` | 4 digit year | +-----------------------+-----------------------------------------------------+ | ``yyyy`` | 4 digit year | +-----------------------+-----------------------------------------------------+ | ``YYY`` | Last 3 digits of year | +-----------------------+-----------------------------------------------------+ | ``YY`` | Last 2 digits of year | +-----------------------+-----------------------------------------------------+ | ``Y`` | Last digit of year | +-----------------------+-----------------------------------------------------+ | ``IYYY`` | 4 digit ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``IYY`` | Last 3 digits of ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``IY`` | Last 2 digits of ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``I`` | Last digit of ISO-8601 week-numbering year | +-----------------------+-----------------------------------------------------+ | ``BC`` / ``bc`` / | Era indicator | | ``AD`` / ``ad`` | | +-----------------------+-----------------------------------------------------+ | ``B.C.`` 
/ ``b.c.`` / | Era indicator with periods | | ``A.D.`` / ``a.d.`` | | +-----------------------+-----------------------------------------------------+ | ``MONTH`` / ``Month`` | Full month name (uppercase, capitalized, lowercase) | | / ``month`` | padded to 9 characters | +-----------------------+-----------------------------------------------------+ | ``MON`` / ``Mon`` / | Short month name (uppercase, capitalized, | | ``mon`` | lowercase) padded to 9 characters | +-----------------------+-----------------------------------------------------+ | ``MM`` | Month number (01-12) | +-----------------------+-----------------------------------------------------+ | ``DAY`` / ``Day`` / | Full day name (uppercase, capitalized, lowercase) | | ``day`` | padded to 9 characters | +-----------------------+-----------------------------------------------------+ | ``DY`` / ``Dy`` / | Short, 3 character day name | | ``dy`` | (uppercase, capitalized, lowercase) | +-----------------------+-----------------------------------------------------+ | ``DDD`` | Day of year (001-366) | +-----------------------+-----------------------------------------------------+ | ``IDDD`` | Day of ISO-8601 week-numbering year, where the | | | first Monday of the first ISO week is day 1 | | | (001-371) | +-----------------------+-----------------------------------------------------+ | ``DD`` | Day of month (01-31) | +-----------------------+-----------------------------------------------------+ | ``D`` | Day of the week, from Sunday (1) to Saturday (7) | +-----------------------+-----------------------------------------------------+ | ``ID`` | ISO-8601 day of the week, from Monday (1) to Sunday | | | (7) | +-----------------------+-----------------------------------------------------+ | ``W`` | Week of month (1-5) | +-----------------------+-----------------------------------------------------+ | ``WW`` | Week number of year (1-53) | +-----------------------+-----------------------------------------------------+ | ``IW`` | Week number of ISO-8601 week-numbering year (01-53) | +-----------------------+-----------------------------------------------------+ | ``CC`` | Century | +-----------------------+-----------------------------------------------------+ | ``J`` | Julian Day | +-----------------------+-----------------------------------------------------+ | ``Q`` | Quarter | +-----------------------+-----------------------------------------------------+ | ``RM`` / ``rm`` | Month in Roman numerals (uppercase, lowercase) | +-----------------------+-----------------------------------------------------+ | ``TZ`` / ``tz`` | Time-zone abbreviation (uppercase, lowercase) | +-----------------------+-----------------------------------------------------+ | ``TZH`` | Time-zone hours | +-----------------------+-----------------------------------------------------+ | ``TZM`` | Time-zone minutes | +-----------------------+-----------------------------------------------------+ | ``OF`` | Time-zone offset from UTC | +-----------------------+-----------------------------------------------------+ Example:: cr> select ... to_char( ... timestamp '1970-01-01T17:31:12', ... 'Day, Month DD - HH12:MI AM YYYY AD' ... ) as ts; +-----------------------------------------+ | ts | +-----------------------------------------+ | Thursday, January 01 - 05:31 PM 1970 AD | +-----------------------------------------+ SELECT 1 row in set (... sec) For ``interval`` expressions, the formatting string accepts the same tokens as ``timestamp`` expressions. 
The function then uses the timestamp of the specified interval added to the timestamp of ``0000/01/01 00:00:00``:: cr> select ... to_char( ... interval '1 year 3 weeks 200 minutes', ... 'YYYY MM DD HH12:MI:SS' ... ) as interval; +---------------------+ | interval | +---------------------+ | 0001 01 22 03:20:00 | +---------------------+ SELECT 1 row in set (... sec) .. _scalar-pg-age: ``age([timestamp,] timestamp)`` --------------------------------------------------- Returns: :ref:`interval ` between 2 timestamps. Second argument is subtracted from the first one. If at least one argument is ``NULL``, the return value is ``NULL``. If only one timestamp is given, the return value is interval between current_date (at midnight) and the given timestamp. Example:: cr> select pg_catalog.age('2021-10-21'::timestamp, '2021-10-20'::timestamp) ... as age; +----------------+ | age | +----------------+ | 1 day 00:00:00 | +----------------+ SELECT 1 row in set (... sec) cr> select pg_catalog.age(date_trunc('day', CURRENT_DATE)) as age; +----------+ | age | +----------+ | 00:00:00 | +----------+ SELECT 1 row in set (... sec) .. _scalar-geo: Geo functions ============= .. _scalar-distance: ``distance(geo_point1, geo_point2)`` ------------------------------------ Returns: ``double precision`` The ``distance`` function can be used to calculate the distance between two points on earth. It uses the `Haversine formula`_ which gives great-circle distances between 2 points on a sphere based on their latitude and longitude. The return value is the distance in meters. Below is an example of the distance function where both points are specified using WKT. See :ref:`data-types-geo` for more information on the implicit type casting of geo points:: cr> select distance('POINT (10 20)', 'POINT (11 21)') AS col; +-------------------+ | col | +-------------------+ | 152354.3209044634 | +-------------------+ SELECT 1 row in set (... sec) This scalar function can always be used in both the ``WHERE`` and ``ORDER BY`` clauses. With the limitation that one of the arguments must be a literal and the other argument must be a column reference. .. NOTE:: The algorithm of the calculation which is used when the distance function is used as part of the result column list has a different precision than what is stored inside the index which is utilized if the distance function is part of a WHERE clause. For example, if ``select distance(...)`` returns 0.0, an equality check with ``where distance(...) = 0`` might not yield anything at all due to the precision difference. .. _scalar-within: ``within(shape1, shape2)`` -------------------------- Returns: ``boolean`` The ``within`` function returns true if ``shape1`` is within ``shape2``. If that is not the case false is returned. ``shape1`` can either be a ``geo_shape`` or a ``geo_point``. ``shape2`` must be a ``geo_shape``. Below is an example of the ``within`` function which makes use of the implicit type casting from strings in WKT representation to geo point and geo shapes:: cr> select within( ... 'POINT (10 10)', ... 'POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))' ... ) AS is_within; +-----------+ | is_within | +-----------+ | TRUE | +-----------+ SELECT 1 row in set (... sec) This function can always be used within the ``WHERE`` clause. .. _scalar-intersects: ``intersects(geo_shape, geo_shape)`` ------------------------------------ Returns: ``boolean`` The ``intersects`` function returns true if both argument shapes share some points or area, they *overlap*. 
This also includes two shapes where one lies :ref:`within ` the other. If ``false`` is returned, both shapes are considered *disjoint*. Example:: cr> select ... intersects( ... {type='Polygon', coordinates=[ ... [[13.4252, 52.7096],[13.9416, 52.0997], ... [12.7221, 52.1334],[13.4252, 52.7096]]]}, ... 'LINESTRING(13.9636 52.6763, 13.2275 51.9578, ... 12.9199 52.5830, 11.9970 52.6830)' ... ) as intersects, ... intersects( ... {type='Polygon', coordinates=[ ... [[13.4252, 52.7096],[13.9416, 52.0997], ... [12.7221, 52.1334],[13.4252, 52.7096]]]}, ... 'LINESTRING (11.0742 49.4538, 11.5686 48.1367)' ... ) as disjoint; +------------+----------+ | intersects | disjoint | +------------+----------+ | TRUE | FALSE | +------------+----------+ SELECT 1 row in set (... sec) Due to a limitation on the :ref:`data-types-geo-shape` datatype this function cannot be used in the :ref:`ORDER BY ` clause. .. _scalar-latitude-longitude: ``latitude(geo_point)`` and ``longitude(geo_point)`` ---------------------------------------------------- Returns: ``double precision`` The ``latitude`` and ``longitude`` function return the coordinates of latitude or longitude of a point, or ``NULL`` if not available. The input must be a column of type ``geo_point``, a valid WKT string or a ``double precision`` array. See :ref:`data-types-geo` for more information on the implicit type casting of geo points. Example:: cr> select ... mountain, ... height, ... longitude(coordinates) as "lon", ... latitude(coordinates) as "lat" ... from sys.summits ... order by height desc limit 1; +------------+--------+---------+---------+ | mountain | height | lon | lat | +------------+--------+---------+---------+ | Mont Blanc | 4808 | 6.86444 | 45.8325 | +------------+--------+---------+---------+ SELECT 1 row in set (... sec) Below is an example of the latitude/longitude functions which make use of the implicit type casting from strings to geo point:: cr> select ... latitude('POINT (10 20)') AS lat, ... longitude([10.0, 20.0]) AS long; +------+------+ | lat | long | +------+------+ | 20.0 | 10.0 | +------+------+ SELECT 1 row in set (... sec) .. _scalar-geohash: ``geohash(geo_point)`` ---------------------- Returns: ``text`` Returns a `GeoHash `_ representation based on full precision (12 characters) of the input point, or ``NULL`` if not available. The input has to be a column of type ``geo_point``, a valid WKT string or a ``double precision`` array. See :ref:`data-types-geo` for more information of the implicit type casting of geo points. Example:: cr> select ... mountain, ... height, ... geohash(coordinates) as "geohash" ... from sys.summits ... order by height desc limit 1; +------------+--------+--------------+ | mountain | height | geohash | +------------+--------+--------------+ | Mont Blanc | 4808 | u0huspw99j1r | +------------+--------+--------------+ SELECT 1 row in set (... sec) .. _scalar-area: ``area(geo_shape)`` ---------------------- Returns: ``double precision`` The ``area`` function calculates the area of the input shape in square-degrees. The calculation will use geospatial awareness (AKA `geodetic`_) instead of `Euclidean geometry`_. The input has to be a column of type :ref:`data-types-geo-shape`, a valid `WKT`_ string or `GeoJSON`_. See :ref:`data-types-geo-shape` for more information. Below you can find an example. Example:: cr> select ... round(area('POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))')) as "area"; +------+ | area | +------+ | 25 | +------+ SELECT 1 row in set (... sec) .. 
_scalar-math: Mathematical functions ====================== All mathematical functions can be used within ``WHERE`` and ``ORDER BY`` clauses. .. _scalar-abs: ``abs(number)`` --------------- Returns the absolute value of the given number in the datatype of the given number. Example:: cr> select abs(214748.0998) AS a, abs(0) AS b, abs(-214748) AS c; +-------------+---+--------+ | a | b | c | +-------------+---+--------+ | 214748.0998 | 0 | 214748 | +-------------+---+--------+ SELECT 1 row in set (... sec) .. _scalar-sign: ``sign(number)`` ---------------- Returns the sign of a number. This function will return one of the following: - If number > 0, it returns 1.0 - If number = 0, it returns 0.0 - If number < 0, it returns -1.0 - If number is NULL, it returns NULL The data type of the return value is ``numeric`` if the argument is ``numeric`` and ``double precision`` for the rest of numeric types. For example:: cr> select sign(12.34) as a, sign(0) as b, sign (-77) as c, sign(NULL) as d; +-----+-----+------+------+ | a | b | c | d | +-----+-----+------+------+ | 1.0 | 0.0 | -1.0 | NULL | +-----+-----+------+------+ SELECT 1 row in set (... sec) .. _scalar-ceil: ``ceil(number)`` ---------------- Returns the smallest integral value that is not less than the argument. Returns: ``numeric``, ``bigint`` or ``integer`` Return value will be of type ``numeric`` if the input value is of ``numeric`` type, with the same precision and scale as the input type. It will be of ``integer`` if the input value is an ``integer`` or ``float``. If the input value is of type ``bigint`` or ``double precision`` the return value will be of type ``bigint``. Example:: cr> select ceil(29.9) AS col; +-----+ | col | +-----+ | 30 | +-----+ SELECT 1 row in set (... sec) .. _scalar-ceiling: ``ceiling(number)`` ------------------- This is an alias for :ref:`ceil <scalar-ceil>`. .. _scalar-degrees: ``degrees(double precision)`` ----------------------------- Convert the given ``radians`` value to ``degrees``. Returns: ``double precision`` :: cr> select degrees(0.5) AS degrees; +-------------------+ | degrees | +-------------------+ | 28.64788975654116 | +-------------------+ SELECT 1 row in set (... sec) .. _scalar-exp: ``exp(number)`` --------------- Returns Euler's number ``e`` raised to the power of the given numeric value. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: > select exp(1.0) AS exp; +-------------------+ | exp | +-------------------+ | 2.718281828459045 | +-------------------+ SELECT 1 row in set (... sec) .. test skipped because java.lang.Math.exp() can return with different precision on different CPUs (e.g.: Apple M1) .. _scalar-floor: ``floor(number)`` ----------------- Returns the largest integral value that is not greater than the argument. Returns: ``numeric``, ``bigint`` or ``integer`` Return value will be of type ``numeric`` if the input value is of ``numeric`` type, with the same precision and scale as the input type. It will be of ``integer`` if the input value is an ``integer`` or ``float``. If the input value is of type ``bigint`` or ``double precision`` the return value will be of type ``bigint``. Example:: cr> select floor(29.9) AS floor; +-------+ | floor | +-------+ | 29 | +-------+ SELECT 1 row in set (... sec) .. _scalar-ln: ``ln(number)`` -------------- Returns the natural logarithm of the given ``number``.
Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT ln(1) AS ln; +-----+ | ln | +-----+ | 0.0 | +-----+ SELECT 1 row in set (... sec) .. NOTE:: An error is returned for arguments which lead to undefined or illegal results. E.g. ln(0) results in ``minus infinity``, and therefore, an error is returned. .. _scalar-log: ``log(x : number[, b : number])`` --------------------------------- Returns the logarithm of given ``x`` to base ``b``. Returns: ``numeric`` or ``double precision`` When the second argument (``b``) is provided it returns a value of type ``double precision``, even if ``x`` is of type ``numeric``, as it's implicitly casted to ``double precision`` (thus, possibly loosing precision). When it's not provided, then the return value will be of type ``numeric`` with unspecified precision and scale, if the input value is of ``numeric`` type and of `double precision`` for any other arithmetic type. Examples:: cr> SELECT log(100, 10) AS log; +-----+ | log | +-----+ | 2.0 | +-----+ SELECT 1 row in set (... sec) The second argument (``b``) is optional. If not present, base 10 is used:: cr> SELECT log(100) AS log; +-----+ | log | +-----+ | 2.0 | +-----+ SELECT 1 row in set (... sec) .. NOTE:: An error is returned for arguments which lead to undefined or illegal results. E.g. log(0) results in ``minus infinity``, and therefore, an error is returned. The same is true for arguments which lead to a ``division by zero``, as, e.g., log(10, 1) does. .. _scalar-modulus: ``modulus(y, x)`` ----------------- Returns the remainder of ``y/x``. Returns: Same as argument types. :: cr> select modulus(5, 4) AS mod; +-----+ | mod | +-----+ | 1 | +-----+ SELECT 1 row in set (... sec) .. _scalar-mod: ``mod(y, x)`` ----------------- This is an alias for :ref:`modulus `. .. _scalar-power: ``power(a: number, b: number)`` ------------------------------- Returns the given argument ``a`` raised to the power of argument ``b``. Returns: ``double precision`` The return type of the power function is always ``double precision``, even when both the inputs are integral types, in order to be consistent across positive and negative exponents (which will yield decimal types). See below for an example:: cr> SELECT power(2,3) AS pow; +-----+ | pow | +-----+ | 8.0 | +-----+ SELECT 1 row in set (... sec) .. _scalar-radians: ``radians(double precision)`` ----------------------------- Convert the given ``degrees`` value to ``radians``. Returns: ``double precision`` :: cr> select radians(45.0) AS radians; +--------------------+ | radians | +--------------------+ | 0.7853981633974483 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-random: ``random()`` ------------ The ``random`` function returns a random value in the range 0.0 <= X < 1.0. Returns: ``double precision`` .. NOTE:: Every call to ``random`` will yield a new random number. .. _scalar-gen_random_text_uuid: ``gen_random_text_uuid()`` -------------------------- Returns a random time based UUID as ``text``. The returned ID is similar to flake IDs and well suited for use as primary key value. Note that the ID is opaque (i.e., not to be considered meaningful in any way) and the implementation is free to change. .. _scalar-round: ``round(number[, precision])`` ------------------------------ Returns ``number`` rounded to the specified ``precision`` (decimal places). 
When ``precision`` is not specified, the ``round`` function rounds the input value to the closest integer for ``real`` and ``integer`` data types with ties rounding up, and to the closest ``bigint`` value for ``double precision`` and ``bigint`` data types with ties rounding up. When the data type of the argument is ``numeric``, then it returns the closest ``numeric`` value with the same precision and scale as the input type, with all decimal digits zeroed out, and with ties rounding up. When it is specified, the result's type is ``numeric``. If ``number`` is of ``numeric`` datatype, then the ``numeric`` type of the result has the same precision and scale with the input. If it's of any other arithmetic type, the ``numeric`` datatype of the result has unspecified precision and scale. Notice that ``round(number)`` and ``round(number, 0)`` may return different result types. Examples:: cr> select round(42.2) AS round; +-------+ | round | +-------+ | 42 | +-------+ SELECT 1 row in set (... sec) cr> select round(42.21, 1) AS round; +-------+ | round | +-------+ | 42.2 | +-------+ SELECT 1 row in set (... sec) .. _scalar-trunc: ``trunc(number[, precision])`` ------------------------------ Returns ``number`` truncated to the specified ``precision`` (decimal places). When ``precision`` is not specified, the result's type is an ``integer``, or ``bigint``. When it is specified, the result's type is ``double precision``. Notice that ``trunc(number)`` and ``trunc(number, 0)`` return different result types. See below for examples:: cr> select trunc(29.999999, 3) AS trunc; +--------+ | trunc | +--------+ | 29.999 | +--------+ SELECT 1 row in set (... sec) cr> select trunc(29.999999) AS trunc; +-------+ | trunc | +-------+ | 29 | +-------+ SELECT 1 row in set (... sec) .. _scalar-sqrt: ``sqrt(number)`` ---------------- Returns the square root of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> select sqrt(25.0) AS sqrt; +------+ | sqrt | +------+ | 5.0 | +------+ SELECT 1 row in set (... sec) .. _scalar-sin: ``sin(number)`` --------------- Returns the sine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT sin(1) AS sin; +--------------------+ | sin | +--------------------+ | 0.8414709848078965 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-asin: ``asin(number)`` ---------------- Returns the arcsine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT asin(1) AS asin; +--------------------+ | asin | +--------------------+ | 1.5707963267948966 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-cos: ``cos(number)`` --------------- Returns the cosine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. 
Example:: cr> SELECT cos(1) AS cos; +--------------------+ | cos | +--------------------+ | 0.5403023058681398 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-acos: ``acos(number)`` ---------------- Returns the arccosine of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT acos(-1) AS acos; +-------------------+ | acos | +-------------------+ | 3.141592653589793 | +-------------------+ SELECT 1 row in set (... sec) .. _scalar-tan: ``tan(number)`` --------------- Returns the tangent of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT tan(1) AS tan; +--------------------+ | tan | +--------------------+ | 1.5574077246549023 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-cot: ``cot(number)`` --------------- Returns the cotangent of the argument that represents the angle expressed in radians. The range of the argument is all real numbers. The cotangent of zero is undefined and returns ``Infinity``. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> select cot(1) AS cot; +--------------------+ | cot | +--------------------+ | 0.6420926159343306 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-atan: ``atan(number)`` ---------------- Returns the arctangent of the argument. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT atan(1) AS atan; +--------------------+ | atan | +--------------------+ | 0.7853981633974483 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-atan2: ``atan2(y: number, x: number)`` ------------------------------- Returns the arctangent of ``y/x``. Returns: ``numeric`` or ``double precision`` Return value will be of type ``numeric`` with unspecified precision and scale if the input value ``y`` or ``x`` is of ``numeric`` type, and ``double precision`` for any other arithmetic type. Example:: cr> SELECT atan2(2, 1) AS atan2; +--------------------+ | atan2 | +--------------------+ | 1.1071487177940904 | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-pi: ``pi()`` -------- Returns the π constant. Returns: ``double precision`` :: cr> SELECT pi() AS pi; +-------------------+ | pi | +-------------------+ | 3.141592653589793 | +-------------------+ SELECT 1 row in set (... sec) .. _scalar-regexp: Regular expression functions ============================ The :ref:`regular expression ` functions in CrateDB use `Java Regular Expressions`_. See the API documentation for more details. .. NOTE:: Be aware that, in contrast to the functions, the :ref:`regular expression operator ` uses `Lucene Regular Expressions`_. .. 
_scalar-regexp_replace: ``regexp_replace(source, pattern, replacement [, flags])`` ---------------------------------------------------------- ``regexp_replace`` can be used to replace every (or only the first) occurrence of a subsequence matching ``pattern`` in the ``source`` string with the ``replacement`` string. If no subsequence in ``source`` matches the regular expression ``pattern``, ``source`` is returned unchanged. Returns: ``text`` ``pattern`` is a Java regular expression. For details on the regexp syntax, see `Java Regular Expressions`_. The ``replacement`` string may contain expressions like ``$N`` where ``N`` is a digit between 0 and 9. It references the nth matched group of ``pattern`` and the matching subsequence of that group will be inserted in the returned string. The expression ``$0`` will insert the whole matching ``source``. By default, only the first occurrence of a subsequence matching ``pattern`` will be replaced. If all occurrences shall be replaced use the ``g`` flag. .. _scalar-regexp_replace-flags: Flags ..... ``regexp_replace`` supports a number of flags as optional parameters. These flags are given as a string containing any of the characters listed below. Order does not matter. +-------+---------------------------------------------------------------------+ | Flag | Description | +=======+=====================================================================+ | ``i`` | enable case insensitive matching | +-------+---------------------------------------------------------------------+ | ``u`` | enable unicode case folding when used together with ``i`` | +-------+---------------------------------------------------------------------+ | ``U`` | enable unicode support for character classes like ``\W`` | +-------+---------------------------------------------------------------------+ | ``s`` | make ``.`` match line terminators, too | +-------+---------------------------------------------------------------------+ | ``m`` | make ``^`` and ``$`` match on the beginning or end of a line | | | too. | +-------+---------------------------------------------------------------------+ | ``x`` | permit whitespace and line comments starting with ``#`` | +-------+---------------------------------------------------------------------+ | ``d`` | only ``\n`` is considered a line-terminator when using ``^``, ``$`` | | | and ``.`` | +-------+---------------------------------------------------------------------+ | ``g`` | replace all occurrences of a subsequence matching ``pattern``, | | | not only the first | +-------+---------------------------------------------------------------------+ .. _scalar-regexp_replace-examples: Examples ........ :: cr> select ... name, ... regexp_replace( ... name, '(\w+)\s(\w+)+', '$1 - $2' ... ) as replaced ... from locations ... order by name limit 5; +---------------------+-----------------------+ | name | replaced | +---------------------+-----------------------+ | | | | Aldebaran | Aldebaran | | Algol | Algol | | Allosimanius Syneca | Allosimanius - Syneca | | Alpha Centauri | Alpha - Centauri | +---------------------+-----------------------+ SELECT 5 rows in set (... sec) :: cr> select ... regexp_replace( ... 'alcatraz', '(foo)(bar)+', '$1baz' ... ) as replaced; +----------+ | replaced | +----------+ | alcatraz | +----------+ SELECT 1 row in set (... sec) :: cr> select ... name, ... regexp_replace( ... name, '([A-Z]\w+) .+', '$1', 'ig' ... ) as replaced ... from locations ... 
order by name limit 5; +---------------------+--------------+ | name | replaced | +---------------------+--------------+ | | | | Aldebaran | Aldebaran | | Algol | Algol | | Allosimanius Syneca | Allosimanius | | Alpha Centauri | Alpha | +---------------------+--------------+ SELECT 5 rows in set (... sec) .. _scalar-arrays: Array functions =============== .. _scalar-array_append: ``array_append(anyarray, value)`` ---------------------------------------- The ``array_append`` function adds the value at the end of the array Returns: ``array`` :: cr> select ... array_append([1,2,3], 4) AS array_append; +--------------+ | array_append | +--------------+ | [1, 2, 3, 4] | +--------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` to append values to an array:: cr> select ... [1,2,3] || 4 AS array_append; +--------------+ | array_append | +--------------+ | [1, 2, 3, 4] | +--------------+ SELECT 1 row in set (... sec) .. NOTE:: The ``||`` operator differs from the ``array_append`` function regarding the handling of ``NULL`` arguments. It will ignore a ``NULL`` value while the ``array_append`` function will append a ``NULL`` value to the array. .. _scalar-array_cat: ``array_cat(first_array, second_array)`` ---------------------------------------- The ``array_cat`` function concatenates two arrays into one array Returns: ``array`` :: cr> select ... array_cat([1,2,3],[3,4,5,6]) AS array_cat; +-----------------------+ | array_cat | +-----------------------+ | [1, 2, 3, 3, 4, 5, 6] | +-----------------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` with arrays:: cr> select ... [1,2,3] || [4,5,6] || [7,8,9] AS arr; +-----------------------------+ | arr | +-----------------------------+ | [1, 2, 3, 4, 5, 6, 7, 8, 9] | +-----------------------------+ SELECT 1 row in set (... sec) .. _scalar-array_unique: ``array_unique(first_array, [ second_array])`` ---------------------------------------------- The ``array_unique`` function merges two arrays into one array with unique elements Returns: ``array`` :: cr> select ... array_unique( ... [1, 2, 3], ... [3, 4, 4] ... ) AS arr; +--------------+ | arr | +--------------+ | [1, 2, 3, 4] | +--------------+ SELECT 1 row in set (... sec) If the arrays have different types all elements will be cast to a common type based on the type precedence. :: cr> select ... array_unique( ... [10, 20], ... [10.0, 20.3] ... ) AS arr; +--------------------+ | arr | +--------------------+ | [10.0, 20.0, 20.3] | +--------------------+ SELECT 1 row in set (... sec) .. _scalar-array_difference: ``array_difference(first_array, second_array)`` ----------------------------------------------- The ``array_difference`` function removes elements from the first array that are contained in the second array. Returns: ``array`` :: cr> select ... array_difference( ... [1,2,3,4,5,6,7,8,9,10], ... [2,3,6,9,15] ... ) AS arr; +---------------------+ | arr | +---------------------+ | [1, 4, 5, 7, 8, 10] | +---------------------+ SELECT 1 row in set (... sec) .. _scalar-array: ``array(subquery)`` ------------------- The ``array(subquery)`` :ref:`expression ` is an array constructor function which operates on the result of the ``subquery``. Returns: ``array`` .. SEEALSO:: :ref:`Array construction with subquery ` .. 
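As a short illustrative sketch (an inline ``VALUES`` clause is used here purely for demonstration), ``array`` collects the rows returned by the subquery into a single array::

    cr> SELECT array(SELECT x FROM (VALUES (1), (2), (3)) AS t (x)) AS arr;
    +-----------+
    | arr       |
    +-----------+
    | [1, 2, 3] |
    +-----------+
    SELECT 1 row in set (... sec)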
_scalar-array_upper: ``array_upper(anyarray, dimension)`` ------------------------------------ The ``array_upper`` function returns the number of elements in the requested array dimension (the upper bound of the dimension). CrateDB allows mixing arrays with different sizes on the same dimension. Returns ``NULL`` if array argument is ``NULL`` or if dimension <= 0 or if dimension is ``NULL``. Returns: ``integer`` :: cr> select array_upper([[1, 4], [3]], 1) AS size; +------+ | size | +------+ | 2 | +------+ SELECT 1 row in set (... sec) An empty array has no dimension and returns ``NULL`` instead of ``0``. :: cr> select array_upper(ARRAY[]::int[], 1) AS size; +------+ | size | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-array_length: ``array_length(anyarray, dimension)`` ------------------------------------- An alias for :ref:`scalar-array_upper`. :: cr> select array_length([[1, 4], [3]], 1) AS len; +-----+ | len | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_lower: ``array_lower(anyarray, dimension)`` ------------------------------------ The ``array_lower`` function returns the lower bound of the requested array dimension (which is ``1`` if the dimension is valid and has at least one element). Returns ``NULL`` if array argument is ``NULL`` or if dimension <= 0 or if dimension is ``NULL``. Returns: ``integer`` :: cr> select array_lower([[1, 4], [3]], 1) AS size; +------+ | size | +------+ | 1 | +------+ SELECT 1 row in set (... sec) If there is at least one empty array or ``NULL`` on the requested dimension return value is ``NULL``. Example: :: cr> select array_lower([[1, 4], [3], []], 2) AS size; +------+ | size | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-array_set: ``array_set(array, index, value)`` ---------------------------------- The ``array_set`` function returns the array with the element at ``index`` set to ``value``. Gaps are filled with ``null``. Returns: ``array`` :: cr> select array_set(['_', 'b'], 1, 'a') AS arr; +------------+ | arr | +------------+ | ["a", "b"] | +------------+ SELECT 1 row in set (... sec) ``array_set(source_array, indexes_array, values_array)`` -------------------------------------------------------- Second overload for ``array_set`` that updates many indices with many values at once. Depending on the indexes provided, ``array_set`` updates or appends the values and also fills any gaps with ``nulls``. Returns: ``array`` :: cr> select array_set(['_', 'b'], [1, 4], ['a', 'd']) AS arr; +-----------------------+ | arr | +-----------------------+ | ["a", "b", null, "d"] | +-----------------------+ SELECT 1 row in set (... sec) .. NOTE:: Updating indexes less than or equal to 0 is not supported. .. _scalar-array_slice: ``array_slice(anyarray, from, to)`` ----------------------------------- The ``array_slice`` function returns a slice of the given array using the given lower and upper bound. Returns: ``array`` .. SEEALSO:: :ref:`Accessing arrays` :: cr> select array_slice(['a', 'b', 'c', 'd'], 2, 3) AS arr; +------------+ | arr | +------------+ | ["b", "c"] | +------------+ SELECT 1 row in set (... sec) .. NOTE:: The first index value is ``1``. The maximum array index is ``2147483647``. Both the ``from`` and ``to`` index values are inclusive. Using an index greater than the array size results in an empty array. .. 
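For example (an illustrative sketch of the note above), a slice that starts beyond the end of the array yields an empty array::

    cr> select array_slice(['a', 'b', 'c', 'd'], 5, 6) AS arr;
    +-----+
    | arr |
    +-----+
    | []  |
    +-----+
    SELECT 1 row in set (... sec)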
_scalar-array_to_string: ``array_to_string(anyarray, separator, [ null_string ])`` --------------------------------------------------------- The ``array_to_string`` function concatenates elements of the given array into a single string using the ``separator``. Returns: ``text`` :: cr> select ... array_to_string( ... ['Arthur', 'Ford', 'Trillian'], ',' ... ) AS str; +----------------------+ | str | +----------------------+ | Arthur,Ford,Trillian | +----------------------+ SELECT 1 row in set (... sec) If the ``separator`` argument is ``NULL``, the result is ``NULL``:: cr> select ... array_to_string( ... ['Arthur', 'Ford', 'Trillian'], NULL ... ) AS str; +------+ | str | +------+ | NULL | +------+ SELECT 1 row in set (... sec) If ``null_string`` is provided and is not ``NULL``, then ``NULL`` elements of the array are replaced by that string, otherwise they are omitted:: cr> select ... array_to_string( ... ['Arthur', NULL, 'Trillian'], ',', 'Ford' ... ) AS str; +----------------------+ | str | +----------------------+ | Arthur,Ford,Trillian | +----------------------+ SELECT 1 row in set (... sec) :: cr> select ... array_to_string( ... ['Arthur', NULL, 'Trillian'], ',' ... ) AS str; +-----------------+ | str | +-----------------+ | Arthur,Trillian | +-----------------+ SELECT 1 row in set (... sec) :: cr> select ... array_to_string( ... ['Arthur', NULL, 'Trillian'], ',', NULL ... ) AS str; +-----------------+ | str | +-----------------+ | Arthur,Trillian | +-----------------+ SELECT 1 row in set (... sec) .. _scalar-string_to_array: ``string_to_array(string, separator, [ null_string ])`` ------------------------------------------------------- The ``string_to_array`` splits a string into an array of ``text`` elements using a supplied separator and an optional null-string to set matching substring elements to NULL. Returns: ``array(text)`` :: cr> select string_to_array('Arthur,Ford,Trillian', ',') AS arr; +--------------------------------+ | arr | +--------------------------------+ | ["Arthur", "Ford", "Trillian"] | +--------------------------------+ SELECT 1 row in set (... sec) :: cr> select string_to_array('Arthur,Ford,Trillian', ',', 'Ford') AS arr; +------------------------------+ | arr | +------------------------------+ | ["Arthur", null, "Trillian"] | +------------------------------+ SELECT 1 row in set (... sec) .. _scalar-string_to_array-separator: ``separator`` ............. If the ``separator`` argument is NULL, each character of the input string becomes a separate element in the resulting array. :: cr> select string_to_array('Ford', NULL) AS arr; +----------------------+ | arr | +----------------------+ | ["F", "o", "r", "d"] | +----------------------+ SELECT 1 row in set (... sec) If the separator is an empty string, then the entire input string is returned as a one-element array. :: cr> select string_to_array('Arthur,Ford', '') AS arr; +-----------------+ | arr | +-----------------+ | ["Arthur,Ford"] | +-----------------+ SELECT 1 row in set (... sec) .. _scalar-string_to_array-null_string: ``null_string`` ............... If the ``null_string`` argument is omitted or NULL, none of the substrings of the input will be replaced by NULL. .. _scalar-array_min: ``array_min(array)`` -------------------- The ``array_min`` function returns the smallest element in ``array``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any of the :ref:`primitive types `. 
:: cr> SELECT array_min([3, 2, 1]) AS min; +-----+ | min | +-----+ | 1 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_position: ``array_position(anycompatiblearray, anycompatible [, integer ] ) → integer`` ----------------------------------------------------------------------------- The ``array_position`` function returns the position of the first occurrence of the second argument in the ``array``, or ``NULL`` if it's not present. If the third argument is given, the search begins at that position. The third argument is ignored if it's null. If not within the ``array`` range, ``NULL`` is returned. It is also possible to search for ``NULL`` values. :: cr> SELECT array_position([1,3,7,4], 7) as position; +----------+ | position | +----------+ | 3 | +----------+ SELECT 1 row in set (... sec) Begin the search from a given position (optional). :: cr> SELECT array_position([1,3,7,4], 7, 2) as position; +----------+ | position | +----------+ | 3 | +----------+ SELECT 1 row in set (... sec) .. TIP:: When searching for the existence of an ``array`` element, using the :ref:`ANY ` operator inside the ``WHERE`` clause is much more efficient, as it can utilize the index, whereas ``array_position`` cannot, even when used inside the ``WHERE`` clause. .. _scalar-array_prepend: ``array_prepend(value, anyarray)`` ---------------------------------- The ``array_prepend`` function prepends a value to the beginning of the array. Returns: ``array`` :: cr> select ... array_prepend(1, [2,3,4]) AS array_prepend; +---------------+ | array_prepend | +---------------+ | [1, 2, 3, 4] | +---------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` to prepend values to an array:: cr> select ... 1 || [2,3,4] AS array_prepend; +---------------+ | array_prepend | +---------------+ | [1, 2, 3, 4] | +---------------+ SELECT 1 row in set (... sec) .. NOTE:: The ``||`` operator differs from the ``array_prepend`` function regarding the handling of ``NULL`` arguments. It will ignore a ``NULL`` value while the ``array_prepend`` function will prepend a ``NULL`` value to the array. .. _scalar-array_max: ``array_max(array)`` -------------------- The ``array_max`` function returns the largest element in ``array``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any of the :ref:`primitive types `. :: cr> SELECT array_max([1,2,3]) AS max; +-----+ | max | +-----+ | 3 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_sum: ``array_sum(array)`` -------------------- Returns the sum of array elements that are not ``NULL``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any :ref:`numeric types `. For ``real`` and ``double precision`` arguments, the return type is equal to the argument type. For ``char``, ``smallint``, ``integer``, and ``bigint`` arguments, the return type changes to ``bigint``. If any ``bigint`` value exceeds the range limits (-2^63 to 2^63-1), an ``ArithmeticException`` will be raised. :: cr> SELECT array_sum([1,2,3]) AS sum; +-----+ | sum | +-----+ | 6 | +-----+ SELECT 1 row in set (... sec) The sum of the ``bigint`` array elements will result in an overflow in the following query: :: cr> SELECT ... array_sum( ... [9223372036854775807, 9223372036854775807] ... ) as sum; ArithmeticException[long overflow] To address the overflow of the sum of the given array elements, we cast the array to the numeric data type: :: cr> SELECT ... array_sum( ...
[9223372036854775807, 9223372036854775807]::numeric[] ... ) as sum; +----------------------+ | sum | +----------------------+ | 18446744073709551614 | +----------------------+ SELECT 1 row in set (... sec) .. _scalar-array_avg: ``array_avg(array)`` -------------------- Returns the average of all values in ``array`` that are not ``NULL``. If ``array`` is ``NULL`` or an empty array, the function returns ``NULL``. This function supports arrays of any :ref:`numeric types `. For ``real`` and ``double precision`` arguments, the return type is equal to the argument type. For ``char``, ``smallint``, ``integer``, and ``bigint`` arguments, the return type is ``numeric``. :: cr> SELECT array_avg([1,2,3]) AS avg; +-----+ | avg | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) .. _scalar-array_unnest: ``array_unnest(nested_array)`` ------------------------------ Takes a nested array and returns a flattened array. Only flattens one level at a time. Returns ``NULL`` if the argument is ``NULL``. ``NULL`` array elements are skipped and ``NULL`` leaf elements within arrays are preserved. :: cr> SELECT array_unnest([[1, 2], [3, 4, 5]]) AS result; +-----------------+ | result | +-----------------+ | [1, 2, 3, 4, 5] | +-----------------+ SELECT 1 row in set (... sec) cr> SELECT array_unnest([[1, null, 2], null, [3, 4, 5]]) AS result; +-----------------------+ | result | +-----------------------+ | [1, null, 2, 3, 4, 5] | +-----------------------+ SELECT 1 row in set (... sec) .. SEEALSO:: :ref:`UNNEST table function ` .. _scalar-null-or-empty-array: ``null_or_empty(array)`` ------------------------- The ``null_or_empty(array)`` function returns a Boolean indicating if an array is ``NULL`` or empty (``[]``). This can serve as a faster alternative to ``IS NULL`` if matching on empty arrays is acceptable. It makes better use of indices. :: cr> SELECT null_or_empty([]) w, ... null_or_empty([[]]) x, ... null_or_empty(NULL) y, ... null_or_empty([1]) z; +------+-------+------+-------+ | w | x | y | z | +------+-------+------+-------+ | TRUE | FALSE | TRUE | FALSE | +------+-------+------+-------+ SELECT 1 row in set (... sec) .. _scalar-objects: Object functions ================ .. _scalar-object_keys: ``object_keys(object)`` ----------------------- The ``object_keys`` function returns the set of first level keys of an ``object``. Returns: ``array(text)`` :: cr> SELECT ... object_keys({a = 1, b = {c = 2}}) AS object_keys; +-------------+ | object_keys | +-------------+ | ["a", "b"] | +-------------+ SELECT 1 row in set (... sec) .. _scalar-concat-object: ``concat(object, object)`` -------------------------- The ``concat(object, object)`` function combines two objects into a new object containing the union of their first level properties, taking the second object's values for duplicate properties. If one of the objects is ``NULL``, the function returns the non-``NULL`` object. If both objects are ``NULL``, the function returns ``NULL``. Returns: ``object`` :: cr> SELECT ... concat({a = 1}, {a = 2, b = {c = 2}}) AS object_concat; +-------------------------+ | object_concat | +-------------------------+ | {"a": 2, "b": {"c": 2}} | +-------------------------+ SELECT 1 row in set (... sec) You can also use the concat :ref:`operator ` ``||`` with objects:: cr> SELECT ... {a = 1} || {b = 2} || {c = 3} AS object_concat; +--------------------------+ | object_concat | +--------------------------+ | {"a": 1, "b": 2, "c": 3} | +--------------------------+ SELECT 1 row in set (... sec) ..
NOTE:: ``concat(object, object)`` does not operate recursively: only the top-level object structure is merged:: cr> SELECT ... concat({a = {b = 4}}, {a = {c = 2}}) as object_concat; +-----------------+ | object_concat | +-----------------+ | {"a": {"c": 2}} | +-----------------+ SELECT 1 row in set (... sec) .. _scalar-null-or-empty-object: ``null_or_empty(object)`` ------------------------- The ``null_or_empty(object)`` function returns a Boolean indicating if an object is ``NULL`` or empty (``{}``). This can serve as a faster alternative to ``IS NULL`` if matching on empty objects is acceptable. It makes better use of indices. :: cr> SELECT null_or_empty({}) x, null_or_empty(NULL) y, null_or_empty({x=10}) z; +------+------+-------+ | x | y | z | +------+------+-------+ | TRUE | TRUE | FALSE | +------+------+-------+ SELECT 1 row in set (... sec) .. _scalar-conditional-fn-exp: Conditional functions and expressions ===================================== .. _scalar-case-when-then-end: ``CASE WHEN ... THEN ... END`` ------------------------------ The ``case`` :ref:`expression ` is a generic conditional expression similar to if/else statements in other programming languages and can be used wherever an expression is valid. :: CASE WHEN condition THEN result [WHEN ...] [ELSE result] END Each *condition* expression must result in a boolean value. If the condition's result is true, the value of the *result* expression that follows the condition will be the final result of the ``case`` expression and the subsequent ``when`` branches will not be processed. If the condition's result is not true, any subsequent ``when`` clauses are examined in the same manner. If no ``when`` condition yields true, the value of the ``case`` expression is the result of the ``else`` clause. If the ``else`` clause is omitted and no condition is true, the result is null. .. Hidden: create table case_example cr> create table case_example (id bigint); CREATE OK, 1 row affected (... sec) cr> insert into case_example (id) values (0),(1),(2),(3); INSERT OK, 4 rows affected (... sec) cr> refresh table case_example REFRESH OK, 1 row affected (... sec) Example:: cr> select id, ... case when id = 0 then 'zero' ... when id % 2 = 0 then 'even' ... else 'odd' ... end as parity ... from case_example order by id; +----+--------+ | id | parity | +----+--------+ | 0 | zero | | 1 | odd | | 2 | even | | 3 | odd | +----+--------+ SELECT 4 rows in set (... sec) As a variant, a ``case`` expression can be written using the *simple* form:: CASE expression WHEN value THEN result [WHEN ...] [ELSE result] END Example:: cr> select id, ... case id when 0 then 'zero' ... when 1 then 'one' ... else 'other' ... end as description ... from case_example order by id; +----+-------------+ | id | description | +----+-------------+ | 0 | zero | | 1 | one | | 2 | other | | 3 | other | +----+-------------+ SELECT 4 rows in set (... sec) .. NOTE:: All *result* expressions must be convertible to a single data type. .. Hidden: drop table case_example cr> drop table case_example; DROP OK, 1 row affected (... sec) .. _scalar-if: ``if(condition, result [, default])`` ------------------------------------- The ``if`` function is a conditional function comparing to *if* statements of most other programming languages. If the given *condition* :ref:`expression ` :ref:`evaluates ` to ``true``, the *result* expression is evaluated and its value is returned. 
If the *condition* evaluates to ``false``, the *result* expression is not evaluated and the optional given *default* expression is evaluated instead and its value will be returned. If the *default* argument is omitted, ``NULL`` will be returned instead. .. Hidden: create table if_example cr> create table if_example (id bigint); CREATE OK, 1 row affected (... sec) cr> insert into if_example (id) values (0),(1),(2),(3); INSERT OK, 4 rows affected (... sec) cr> refresh table if_example REFRESH OK, 1 row affected (... sec) :: cr> select ... id, ... if(id = 0, 'zero', 'other') as description ... from if_example ... order by id; +----+-------------+ | id | description | +----+-------------+ | 0 | zero | | 1 | other | | 2 | other | | 3 | other | +----+-------------+ SELECT 4 rows in set (... sec) .. Hidden: drop table if_example cr> drop table if_example; DROP OK, 1 row affected (... sec) .. _scalar-coalesce: ``coalesce('first_arg', second_arg [, ... ])`` ---------------------------------------------- The ``coalesce`` function takes one or more arguments of the same type and returns the first non-null value of these. The result will be NULL only if all the arguments :ref:`evaluate ` to NULL. Returns: same type as arguments :: cr> select coalesce(clustered_by, 'nothing') AS clustered_by ... from information_schema.tables ... where table_name='nodes'; +--------------+ | clustered_by | +--------------+ | nothing | +--------------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``coalesce`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-greatest: ``greatest('first_arg', second_arg[ , ... ])`` ---------------------------------------------- The ``greatest`` function takes one or more arguments of the same type and will return the largest value of these. NULL values in the arguments list are ignored. The result will be NULL only if all the arguments :ref:`evaluate ` to NULL. Returns: same type as arguments :: cr> select greatest(1, 2) AS greatest; +----------+ | greatest | +----------+ | 2 | +----------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``greatest`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-least: ``least('first_arg', second_arg[ , ... ])`` ------------------------------------------- The ``least`` function takes one or more arguments of the same type and will return the smallest value of these. NULL values in the arguments list are ignored. The result will be NULL only if all the arguments :ref:`evaluate ` to NULL. Returns: same type as arguments :: cr> select least(1, 2) AS least; +-------+ | least | +-------+ | 1 | +-------+ SELECT 1 row in set (... sec) .. NOTE:: If the data types of the arguments are not of the same type, ``least`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-nullif: ``nullif('first_arg', second_arg)`` ----------------------------------- The ``nullif`` function compares two arguments of the same type and, if they have the same value, returns NULL; otherwise returns the first argument. Returns: same type as arguments :: cr> select nullif(table_schema, 'sys') AS nullif ... from information_schema.tables ... where table_name='nodes'; +--------+ | nullif | +--------+ | NULL | +--------+ SELECT 1 row in set (... sec) .. 
NOTE:: If the data types of the arguments are not of the same type, ``nullif`` will try to cast them to a common type, and if it fails to do so, an error is thrown. .. _scalar-sysinfo: System information functions ============================ .. _scalar-current_schema: ``CURRENT_SCHEMA`` ------------------ The ``CURRENT_SCHEMA`` system information function returns the name of the current schema of the session. If no current schema is set, this function will return the default schema, which is ``doc``. Returns: ``text`` The default schema can be set when using the `JDBC client `_ and :ref:`HTTP clients ` such as `CrateDB PDO`_. .. NOTE:: The ``CURRENT_SCHEMA`` function has a special SQL syntax, meaning that it must be called without trailing parenthesis (``()``). However, CrateDB also supports the optional parenthesis. Synopsis:: CURRENT_SCHEMA [ ( ) ] Example:: cr> SELECT CURRENT_SCHEMA; +----------------+ | current_schema | +----------------+ | doc | +----------------+ SELECT 1 row in set (... sec) .. _scalar-current_schemas: ``CURRENT_SCHEMAS(boolean)`` ---------------------------- The ``CURRENT_SCHEMAS()`` system information function returns the current stored schemas inside the :ref:`search_path ` session state, optionally including implicit schemas (e.g. ``pg_catalog``). If no custom :ref:`search_path ` is set, this function will return the default :ref:`search_path ` schemas. Returns: ``array(text)`` Synopsis:: CURRENT_SCHEMAS ( boolean ) Example:: cr> SELECT CURRENT_SCHEMAS(true) AS schemas; +-----------------------+ | schemas | +-----------------------+ | ["pg_catalog", "doc"] | +-----------------------+ SELECT 1 row in set (... sec) .. _scalar-current_user: ``CURRENT_USER`` ---------------- The ``CURRENT_USER`` system information function returns the name of the current connected user or ``crate`` if the user management module is disabled. Returns: ``text`` Synopsis:: CURRENT_USER Example:: cr> select current_user AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-current_role: ``CURRENT_ROLE`` ---------------- Equivalent to `CURRENT_USER`_. Returns: ``text`` Synopsis:: CURRENT_ROLE Example:: cr> select current_role AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-user: ``USER`` -------- Equivalent to `CURRENT_USER`_. Returns: ``text`` Synopsis:: USER Example:: cr> select user AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-session_user: ``SESSION_USER`` ---------------- The ``SESSION_USER`` system information function returns the name of the current connected user or ``crate`` if the user management module is disabled. Returns: ``text`` Synopsis:: SESSION_USER Example:: cr> select session_user AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. NOTE:: CrateDB doesn't currently support the switching of execution context. This makes `SESSION_USER`_ functionally equivalent to `CURRENT_USER`_. We provide it as it's part of the SQL standard. Additionally, the `CURRENT_USER`_, `SESSION_USER`_ and `USER`_ functions have a special SQL syntax, meaning that they must be called without trailing parenthesis (``()``). .. _scalar-has-database-priv: ``has_database_privilege([user,] database, privilege text)`` ------------------------------------------------------------ Returns ``boolean`` or ``NULL`` if at least one argument is ``NULL``. First argument is ``TEXT`` user name or ``INTEGER`` user OID. 
If user is not specified current user is used as an argument. Second argument is ``TEXT`` database name or ``INTEGER`` database OID. .. NOTE:: Only `crate` is valid for database name and only `0` is valid for database OID. Third argument is privilege(s) to check. Multiple privileges can be provided as a comma separated list, in which case the result will be ``true`` if any of the listed privileges is held. Allowed privilege types are ``CONNECT``, ``CREATE`` and ``TEMP`` or ``TEMPORARY``. Privilege string is case insensitive and extra whitespace is allowed between privilege names. Duplicate entries in privilege string are allowed. :CONNECT: is ``true`` for all defined users in the database :CREATE: is ``true`` if the user has any ``DDL`` privilege on ``CLUSTER`` or on any ``SCHEMA`` :TEMP: is ``false`` for all users Example:: cr> select has_database_privilege('crate', ' Connect , CREATe ') ... as has_priv; +----------+ | has_priv | +----------+ | TRUE | +----------+ SELECT 1 row in set (... sec) .. _scalar-has-schema-priv: ``has_schema_privilege([user,] schema, privilege text)`` -------------------------------------------------------- Returns ``boolean`` or ``NULL`` if at least one argument is ``NULL``. First argument is ``TEXT`` user name or ``INTEGER`` user OID. If user is not specified current user is used as an argument. Second argument is ``TEXT`` schema name or ``INTEGER`` schema OID. Third argument is privilege(s) to check. Multiple privileges can be provided as a comma separated list, in which case the result will be ``true`` if any of the listed privileges is held. Allowed privilege types are ``CREATE`` and ``USAGE`` which corresponds to CrateDB's ``DDL`` and ``DQL``. Privilege string is case insensitive and extra whitespace is allowed between privilege names. Duplicate entries in privilege string are allowed. Example:: cr> select has_schema_privilege('pg_catalog', ' Create , UsaGe , CREATe ') ... as has_priv; +----------+ | has_priv | +----------+ | TRUE | +----------+ SELECT 1 row in set (... sec) .. NOTE:: For unknown schemas: - Returns ``TRUE`` for superusers. - For a user with ``DQL`` on cluster scope, returns ``TRUE`` if the privilege type is ``USAGE``. - For a user with ``DML`` on cluster scope, returns ``TRUE`` if the privilege type is ``CREATE``. - Returns ``FALSE`` otherwise. .. _scalar-has-table-priv: ``has_table_privilege([user,] table, privilege text)`` ------------------------------------------------------ Returns ``boolean`` or ``NULL`` if at least one argument is ``NULL``. First argument is ``TEXT`` user name or ``INTEGER`` user OID. If user is not specified current user is used as an argument. Second argument is ``TEXT`` table name or ``INTEGER`` table OID. Third argument is privilege(s) to check. Multiple privileges can be provided as a comma separated list, in which case the result will be ``true`` if any of the listed privileges is held. Allowed privilege types are ``SELECT`` which corresponds to CrateDB's ``DQL`` and ``INSERT``, ``UPDATE``, ``DELETE`` which all correspond to CrateDB's ``DML``. Privilege string is case insensitive and extra whitespace is allowed between privilege names. Duplicate entries in privilege string are allowed. Example:: cr> select has_table_privilege('sys.summits', ' Select ') ... as has_priv; +----------+ | has_priv | +----------+ | TRUE | +----------+ SELECT 1 row in set (... sec) .. NOTE:: For unknown tables: - Returns ``TRUE`` for superusers. 
- For a user with ``DQL`` on cluster scope, returns ``TRUE`` if the privilege type is ``SELECT``. - For a user with ``DML`` on cluster scope, returns ``TRUE`` if the privilege type is ``INSERT``, ``UPDATE`` or ``DELETE``. - For a user with ``DQL`` on the schema, returns ``TRUE`` if the privilege type is ``SELECT``. - For a user with ``DML`` on the schema, returns ``TRUE`` if the privilege type is ``INSERT``, ``UPDATE`` or ``DELETE``. - Returns ``FALSE`` otherwise. .. _scalar-pg_backend_pid: ``pg_backend_pid()`` -------------------- The ``pg_backend_pid()`` system information function is implemented for enhanced compatibility with PostgreSQL. CrateDB will always return ``-1`` as there isn't a single process attached to one query. This is different to PostgreSQL, where this represents the process ID of the server process attached to the current session. Returns: ``integer`` Synopsis:: pg_backend_pid() Example:: cr> select pg_backend_pid() AS pid; +-----+ | pid | +-----+ | -1 | +-----+ SELECT 1 row in set (... sec) .. _scalar-pg_postmaster_start_time: ``pg_postmaster_start_time()`` ------------------------------ Returns the server start time as ``timestamp with time zone``. .. _scalar-current_database: ``current_database()`` ---------------------- The ``current_database`` function returns the name of the current database, which in CrateDB will always be ``crate``:: cr> select current_database() AS db; +-------+ | db | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-current_setting: ``current_setting(text [,boolean])`` ------------------------------------ The ``current_setting`` function returns the current value of a :ref:`session setting `. Returns: ``text`` Synopsis:: current_setting(setting_name [, missing_ok]) If no setting exists for ``setting_name``, current_setting throws an error, unless ``missing_ok`` argument is provided and is true. Examples:: cr> select current_setting('search_path') AS search_path; +-------------+ | search_path | +-------------+ | doc | +-------------+ SELECT 1 row in set (... sec) :: cr> select current_setting('foo'); SQLParseException[Unrecognised Setting: foo] :: cr> select current_setting('foo', true) AS foo; +------+ | foo | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_expr: ``pg_get_expr()`` ----------------- The function ``pg_get_expr`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. The function always returns ``null``. Synopsis:: pg_get_expr(expr text, relation_oid int [, pretty boolean]) Example:: cr> select pg_get_expr('literal', 1) AS col; +------+ | col | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_partkeydef: ``pg_get_partkeydef()`` ----------------------- The function ``pg_get_partkeydef`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. Partitioning in CrateDB is different from PostgreSQL, therefore this function always returns ``null``. Synopsis:: pg_get_partkeydef(relation_oid int) Example:: cr> select pg_get_partkeydef(1) AS col; +------+ | col | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_serial_sequence: ``pg_get_serial_sequence()`` ---------------------------- The function ``pg_get_serial_sequence`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. The function always returns ``null``. Existence of tables or columns is not validated. 
Synopsis:: pg_get_serial_sequence(table_name text, column_name text) Example:: cr> select pg_get_serial_sequence('t1', 'c1') AS col; +------+ | col | +------+ | NULL | +------+ SELECT 1 row in set (... sec) .. _scalar-pg_encoding_to_char: ``pg_encoding_to_char()`` ------------------------- The function ``pg_encoding_to_char`` converts an PostgreSQL encoding's internal identifier to a human-readable name. Returns: ``text`` Synopsis:: pg_encoding_to_char(encoding int) Example:: cr> select pg_encoding_to_char(6) AS encoding; +----------+ | encoding | +----------+ | UTF8 | +----------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_userbyid: ``pg_get_userbyid()`` --------------------- The function ``pg_get_userbyid`` is implemented to improve compatibility with clients that use the PostgreSQL wire protocol. The function always returns the default CrateDB user for non-null arguments, otherwise, ``null`` is returned. Returns: ``text`` Synopsis:: pg_get_userbyid(id integer) Example:: cr> select pg_get_userbyid(-450373579) AS name; +-------+ | name | +-------+ | crate | +-------+ SELECT 1 row in set (... sec) .. _scalar-pg_typeof: ``pg_typeof()`` --------------- The function ``pg_typeof`` returns the text representation of the value's data type passed to it. Returns: ``text`` Synopsis:: pg_typeof(expression) Example: :: cr> select pg_typeof([1, 2, 3]) as typeof; +---------------+ | typeof | +---------------+ | integer_array | +---------------+ SELECT 1 row in set (... sec) .. _scalar-pg_function_is_visible: ``pg_function_is_visible()`` ---------------------------- The function ``pg_function_is_visible`` returns true for OIDs that refer to a system or a user defined function. Returns: ``boolean`` Synopsis:: pg_function_is_visible(OID) Example: :: cr> select pg_function_is_visible(-919555782) as pg_function_is_visible; +------------------------+ | pg_function_is_visible | +------------------------+ | TRUE | +------------------------+ SELECT 1 row in set (... sec) .. _scalar-pg_table_is_visible: ``pg_table_is_visible()`` ------------------------- The function ``pg_table_is_visible`` accepts an OID as an argument. It returns ``true`` if the current user holds at least one of ``DQL``, ``DDL`` or ``DML`` privilege on the table or view referred by the OID and there are no other tables or views with the same name and privileges but with different schema names appearing earlier in the search path. Returns: ``boolean`` Example: :: cr> select pg_table_is_visible(912037690) as is_visible; +------------+ | is_visible | +------------+ | TRUE | +------------+ SELECT 1 row in set (... sec) .. _scalar-pg_get_function_result: ``pg_get_function_result()`` ---------------------------- The function ``pg_get_function_result`` returns the text representation of the return value's data type of the function referred by the OID. Returns: ``text`` Synopsis:: pg_get_function_result(OID) Example: :: cr> select pg_get_function_result(-919555782) as _pg_get_function_result; +-------------------------+ | _pg_get_function_result | +-------------------------+ | time with time zone | +-------------------------+ SELECT 1 row in set (... sec) .. _scalar-version: ``version()`` ------------- Returns the CrateDB version information. Returns: ``text`` Synopsis:: version() Example: :: cr> select version() AS version; +---------...-+ | version | +---------...-+ | CrateDB ... | +---------...-+ SELECT 1 row in set (... sec) .. 
_scalar-col_description: ``col_description(integer, integer)`` ------------------------------------- This function exists mainly for compatibility with PostgreSQL. In PostgreSQL, the function returns the comment for a table column. CrateDB doesn't support user defined comments for table columns, so it always returns ``null``. Returns: ``text`` Example: :: cr> SELECT pg_catalog.col_description(1, 1) AS comment; +---------+ | comment | +---------+ | NULL | +---------+ SELECT 1 row in set (... sec) .. _scalar-obj_description: ``obj_description(integer, text)`` ---------------------------------- This function exists mainly for compatibility with PostgreSQL. In PostgreSQL, the function returns the comment for a database object. CrateDB doesn't support user defined comments for database objects, so it always returns ``null``. Returns: ``text`` Example: :: cr> SELECT pg_catalog.obj_description(1, 'pg_type') AS comment; +---------+ | comment | +---------+ | NULL | +---------+ SELECT 1 row in set (... sec) .. _scalar-format_type: ``format_type(integer, integer)`` --------------------------------- Returns the type name of a type. The first argument is the ``OID`` of the type. The second argument is the type modifier. This function exists for PostgreSQL compatibility and the type modifier is always ignored. Returns: ``text`` Example: :: cr> SELECT pg_catalog.format_type(25, null) AS name; +------+ | name | +------+ | text | +------+ SELECT 1 row in set (... sec) If the given ``OID`` is not know, ``???`` is returned:: cr> SELECT pg_catalog.format_type(3, null) AS name; +------+ | name | +------+ | ??? | +------+ SELECT 1 row in set (... sec) .. _scalar-special: Special functions ================= .. _scalar_knn_match: ``knn_match(float_vector, float_vector, int)`` ---------------------------------------------- The ``knn_match`` function uses a k-nearest neighbour (kNN) search algorithm to find vectors that are similar to a query vector. The first argument is the column to search. The second argument is the query vector. The third argument is the number of nearest neighbours to search in the index. Searching a larger number of nearest neighbours is more expensive. There is one index per shard, and on each shard the function will match at most `k` records. To limit the total query result, add a :ref:`LIMIT clause ` to the query. ``knn_match(search_vector, target, k)`` This function must be used within a ``WHERE`` clause targeting a table to use it as a predicate that searches the whole dataset of a table. Using it *outside* of a ``WHERE`` clause, or in a ``WHERE`` clause targeting a virtual table instead of a physical table, results in an error. Similar to the :ref:`MATCH predicate `, this function affects the :ref:`_score ` value. An example:: cr> CREATE TABLE IF NOT EXISTS doc.vectors ( ... xs float_vector(2) ... ); CREATE OK, 1 row affected (... sec) cr> INSERT INTO doc.vectors (xs) ... VALUES ... ([3.14, 8.17]), ... ([14.3, 19.4]); INSERT OK, 2 rows affected (... sec) .. HIDE: cr> REFRESH TABLE doc.vectors; REFRESH OK, 1 row affected (... sec) :: cr> SELECT xs, _score FROM doc.vectors ... WHERE knn_match(xs, [3.14, 8], 2) ... ORDER BY _score DESC; +--------------+--------------+ | xs | _score | +--------------+--------------+ | [3.14, 8.17] | 0.9719117 | | [14.3, 19.4] | 0.0039138086 | +--------------+--------------+ SELECT 2 rows in set (... sec) .. 
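Building on the example table above, the following sketch shows how a :ref:`LIMIT clause ` caps the total number of rows returned, since each shard can contribute up to ``k`` matches::

    cr> SELECT xs, _score FROM doc.vectors
    ... WHERE knn_match(xs, [3.14, 8], 2)
    ... ORDER BY _score DESC
    ... LIMIT 1;
    +--------------+-----------+
    | xs           | _score    |
    +--------------+-----------+
    | [3.14, 8.17] | 0.9719117 |
    +--------------+-----------+
    SELECT 1 row in set (... sec)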
_scalar-ignore3vl: ``ignore3vl(boolean)`` ---------------------- The ``ignore3vl`` function operates on a boolean argument and eliminates the `3-valued logic`_ on the whole tree of :ref:`operators ` beneath it. More specifically, ``FALSE`` is :ref:`evaluated ` to ``FALSE``, ``TRUE`` to ``TRUE`` and ``NULL`` to ``FALSE``. Returns: ``boolean`` .. HIDE: cr> CREATE TABLE IF NOT EXISTS doc.t( ... int_array_col array(integer) ... ); CREATE OK, 1 row affected (... sec) cr> INSERT INTO doc.t(int_array_col) ... VALUES ([1,2,3, null]); INSERT OK, 1 row affected (... sec) cr> REFRESH table doc.t; REFRESH OK, 1 row affected (... sec) .. NOTE:: The main usage of the ``ignore3vl`` function is in the ``WHERE`` clause when a ``NOT`` operator is involved. Such filtering, with `3-valued logic`_, cannot be translated to an optimized query in the internal storage engine, and therefore can degrade performance. E.g.:: SELECT * FROM t WHERE NOT 5 = ANY(t.int_array_col); If we can ignore the `3-valued logic`_, we can write the query as:: SELECT * FROM t WHERE NOT IGNORE3VL(5 = ANY(t.int_array_col)); which will yield better performance (in execution time) than before. .. CAUTION:: If there are ``NULL`` values in the ``int_array_col``, in the case that ``5 = ANY(t.int_array_col)`` evaluates to ``NULL``, without the ``ignore3vl``, it would be evaluated as ``NOT NULL`` => ``NULL``, resulting in zero matched rows. With the ``IGNORE3VL`` in place it will be evaluated as ``NOT FALSE`` => ``TRUE``, resulting in all rows matching the filter. E.g.:: cr> SELECT * FROM t ... WHERE NOT 5 = ANY(t.int_array_col); +---------------+ | int_array_col | +---------------+ +---------------+ SELECT 0 rows in set (... sec) :: cr> SELECT * FROM t ... WHERE NOT IGNORE3VL(5 = ANY(t.int_array_col)); +-----------------+ | int_array_col | +-----------------+ | [1, 2, 3, null] | +-----------------+ SELECT 1 row in set (... sec) .. HIDE: cr> DROP TABLE IF EXISTS doc.t; DROP OK, 1 row affected (... sec) Synopsis:: ignore3vl(boolean) Example:: cr> SELECT ... ignore3vl(true) as v1, ... ignore3vl(false) as v2, ... ignore3vl(null) as v3; +------+-------+-------+ | v1 | v2 | v3 | +------+-------+-------+ | TRUE | FALSE | FALSE | +------+-------+-------+ SELECT 1 row in set (... sec) .. _scalar-vector: Vector functions ================ .. _scalar_vector_similarity: ``vector_similarity(float_vector, float_vector)`` -------------------------------------------------------- Returns the similarity of two :ref:`FLOAT_VECTORS ` as a :ref:`FLOAT ` typed value. Similarity is based on the Euclidean distance and lies in the range ``(0,1]``. If the two vectors coincide, the function returns the maximal possible similarity of 1. The greater the distance between the vectors, the closer the similarity gets to 0. If at least one argument is ``NULL``, the function returns ``NULL``. An example:: cr> SELECT vector_similarity([1.2, 1.3], [10.2, 10.3]) AS vs; +-------------+ | vs | +-------------+ | 0.006134969 | +-------------+ SELECT 1 row in set (... sec) .. _3-valued logic: https://en.wikipedia.org/wiki/Null_(SQL)#Comparisons_with_NULL_and_the_three-valued_logic_(3VL) .. _available time zones: https://www.joda.org/joda-time/timezones.html .. _CrateDB PDO: https://crate.io/docs/pdo/en/latest/connect.html .. _Euclidean geometry: https://en.wikipedia.org/wiki/Euclidean_geometry .. _formatter: https://docs.oracle.com/javase/7/docs/api/java/util/Formatter.html .. _geodetic: https://en.wikipedia.org/wiki/Geodesy .. _GeoJSON: https://geojson.org/ ..
_Haversine formula: https://en.wikipedia.org/wiki/Haversine_formula .. _Java DateTimeFormatter: https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html .. _Java DecimalFormat: https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html .. _Java Regular Expressions: https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html .. _Joda-Time: https://www.joda.org/joda-time/ .. _Lucene Regular Expressions: https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/util/automaton/RegExp.html .. _MySQL date_format: https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_date-format .. _WKT: https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry.. highlight:: psql .. _aggregation: =========== Aggregation =========== When :ref:`selecting data ` from CrateDB, you can use an `aggregate function`_ to calculate a single summary value for one or more columns. For example:: cr> SELECT count(*) FROM locations; +----------+ | count(*) | +----------+ | 13 | +----------+ SELECT 1 row in set (... sec) Here, the :ref:`count(*) ` function computes the result across all rows. Aggregate :ref:`functions ` can be used with the :ref:`sql_dql_group_by` clause. When used like this, an aggregate function returns a single summary value for each grouped collection of column values. For example:: cr> SELECT kind, count(*) FROM locations GROUP BY kind; +-------------+----------+ | kind | count(*) | +-------------+----------+ | Galaxy | 4 | | Star System | 4 | | Planet | 5 | +-------------+----------+ SELECT 3 rows in set (... sec) .. TIP:: Aggregation works across all the rows that match a query or on all matching rows in every distinct group of a ``GROUP BY`` statement. Aggregating ``SELECT`` statements without ``GROUP BY`` will always return one row. .. rubric:: Table of contents .. contents:: :local: .. _aggregation-expressions: Aggregate expressions ===================== An *aggregate expression* represents the application of an :ref:`aggregate function ` across rows selected by a query. Besides the function signature, :ref:`expressions ` might contain supplementary clauses and keywords. The synopsis of an aggregate expression is one of the following:: aggregate_function ( * ) [ FILTER ( WHERE condition ) ] aggregate_function ( [ DISTINCT ] expression [ , ... ] ) [ FILTER ( WHERE condition ) ] Here, ``aggregate_function`` is a name of an aggregate function and ``expression`` is a column reference, :ref:`scalar function ` or literal. If ``FILTER`` is specified, then only the rows that met the :ref:`sql_dql_where_clause` condition are supplied to the aggregate function. The optional ``DISTINCT`` keyword is only supported by aggregate functions that explicitly mention its support. Please refer to existing :ref:`limitations ` for further information. The aggregate expression form that uses a ``wildcard`` instead of an ``expression`` as a function argument is supported only by the ``count(*)`` aggregate function. .. _aggregation-functions: Aggregate functions =================== .. _aggregation-arbitrary: ``arbitrary(column)`` --------------------- The ``arbitrary`` aggregate function returns a single value of a column. Which value it returns is not defined. Its return type is the type of its parameter column and can be ``NULL`` if the column contains ``NULL`` values. Example:: cr> select arbitrary(position) from locations; +---------------------+ | arbitrary(position) | +---------------------+ | ... 
| +---------------------+ SELECT 1 row in set (... sec) :: cr> select arbitrary(name), kind from locations ... where name != '' ... group by kind order by kind desc; +-...-------------+-------------+ | arbitrary(name) | kind | +-...-------------+-------------+ | ... | Star System | | ... | Planet | | ... | Galaxy | +-...-------------+-------------+ SELECT 3 rows in set (... sec) An example use case is to group a table with many rows per user by ``user_id`` and get the ``username`` for every group, that means every user. This works as rows with same ``user_id`` have the same ``username``. This method performs better than grouping on ``username`` as grouping on number types is generally faster than on strings. The advantage is that the ``arbitrary`` function does very little to no computation as for example ``max`` aggregate function would do. .. _aggregation-any-value: ``any_value(column)`` --------------------- ``any_value`` is an alias for :ref:`arbitrary `. Example:: cr> select any_value(x) from unnest([1, 1]) t (x); +--------------+ | any_value(x) | +--------------+ | 1 | +--------------+ SELECT 1 row in set (... sec) .. _aggregation-array-agg: ``array_agg(column)`` --------------------- The ``array_agg`` aggregate function concatenates all input values into an array. :: cr> SELECT array_agg(x) FROM (VALUES (42), (832), (null), (17)) as t (x); +---------------------+ | array_agg(x) | +---------------------+ | [42, 832, null, 17] | +---------------------+ SELECT 1 row in set (... sec) .. SEEALSO:: :ref:`aggregation-string-agg` .. _aggregation-avg: ``avg(column)`` --------------- The ``avg`` and ``mean`` aggregate function returns the arithmetic mean, the *average*, of all values in a column that are not ``NULL``. It accepts all numeric, timestamp and interval types as single argument. For ``numeric`` argument type the return type is ``numeric``, for ``interval`` argument type the return type is ``interval`` and for other argument type the return type is ``double``. Example:: cr> select avg(position), kind from locations ... group by kind order by kind; +---------------+-------------+ | avg(position) | kind | +---------------+-------------+ | 3.25 | Galaxy | | 3.0 | Planet | | 2.5 | Star System | +---------------+-------------+ SELECT 3 rows in set (... sec) The ``avg`` aggregation on the ``bigint`` column might result in a precision error if sum of elements exceeds 2^53:: cr> select avg(t.val) from ... (select unnest([9223372036854775807, 9223372036854775807]) as val) t; +-----------------------+ | avg(val) | +-----------------------+ | 9.223372036854776e+18 | +-----------------------+ SELECT 1 row in set (... sec) To address the precision error of the avg aggregation, we cast the aggregation column to the ``numeric`` data type:: cr> select avg(t.val :: numeric) from ... (select unnest([9223372036854775807, 9223372036854775807]) as val) t; +---------------------------+ | avg(cast(val AS NUMERIC)) | +---------------------------+ | 9223372036854775807 | +---------------------------+ SELECT 1 row in set (... sec) .. _aggregation-avg-distinct: ``avg(DISTINCT column)`` ~~~~~~~~~~~~~~~~~~~~~~~~ The ``avg`` aggregate function also supports the ``distinct`` keyword. This keyword changes the behaviour of the function so that it will only average the number of distinct values in this column that are not ``NULL``:: cr> select ... avg(distinct position) AS avg_pos, ... count(*), ... date ... from locations group by date ... 
order by 1 desc, count(*) desc; +---------+----------+---------------+ | avg_pos | count(*) | date | +---------+----------+---------------+ | 4.0 | 1 | 1367366400000 | | 3.6 | 8 | 1373932800000 | | 2.0 | 4 | 308534400000 | +---------+----------+---------------+ SELECT 3 rows in set (... sec) :: cr> select avg(distinct position) AS avg_pos from locations; +---------+ | avg_pos | +---------+ | 3.5 | +---------+ SELECT 1 row in set (... sec) .. _aggregation-count: ``count(column)`` ----------------- In contrast to the :ref:`aggregation-count-star` function, the ``count`` function used with a column name as parameter returns the number of rows with a non-``NULL`` value in that column. Example:: cr> select count(name), count(*), date from locations group by date ... order by count(name) desc, count(*) desc; +-------------+----------+---------------+ | count(name) | count(*) | date | +-------------+----------+---------------+ | 7 | 8 | 1373932800000 | | 4 | 4 | 308534400000 | | 1 | 1 | 1367366400000 | +-------------+----------+---------------+ SELECT 3 rows in set (... sec) .. _aggregation-count-distinct: ``count(DISTINCT column)`` ~~~~~~~~~~~~~~~~~~~~~~~~~~ The ``count`` aggregate function also supports the ``distinct`` keyword. This keyword changes the behaviour of the function so that it will only count the number of distinct values in this column that are not ``NULL``:: cr> select ... count(distinct kind) AS num_kind, ... count(*), ... date ... from locations group by date ... order by num_kind, count(*) desc; +----------+----------+---------------+ | num_kind | count(*) | date | +----------+----------+---------------+ | 1 | 1 | 1367366400000 | | 3 | 8 | 1373932800000 | | 3 | 4 | 308534400000 | +----------+----------+---------------+ SELECT 3 rows in set (... sec) :: cr> select count(distinct kind) AS num_kind from locations; +----------+ | num_kind | +----------+ | 3 | +----------+ SELECT 1 row in set (... sec) .. SEEALSO:: :ref:`aggregation-hyperloglog-distinct` for an alternative that trades some accuracy for improved performance. .. _aggregation-count-star: ``count(*)`` ~~~~~~~~~~~~ This aggregate function simply returns the number of rows that match the query. ``count(columnName)`` is also possible, but currently only works on a primary key column. The semantics are the same. The return value is always of type ``bigint``. :: cr> select count(*) from locations; +----------+ | count(*) | +----------+ | 13 | +----------+ SELECT 1 row in set (... sec) ``count(*)`` can also be used on group by queries:: cr> select count(*), kind from locations group by kind order by kind asc; +----------+-------------+ | count(*) | kind | +----------+-------------+ | 4 | Galaxy | | 5 | Planet | | 4 | Star System | +----------+-------------+ SELECT 3 rows in set (... sec) .. _aggregation-geometric-mean: ``geometric_mean(column)`` -------------------------- The ``geometric_mean`` aggregate function computes the geometric mean, a mean for positive numbers. For details see: `Geometric Mean`_. ``geometric_mean`` is defined on all numeric types and on timestamp. It always returns ``double precision`` values. If a value is negative, if all values are null, or if there are no values at all, ``NULL`` is returned. If any of the aggregated values is ``0``, the result will be ``0.0`` as well. .. CAUTION:: Due to Java double precision arithmetic it is possible that any two executions of the aggregate function on the same data produce slightly differing results. Example:: cr> select geometric_mean(position), kind from locations ...
group by kind order by kind; +--------------------------+-------------+ | geometric_mean(position) | kind | +--------------------------+-------------+ | 2.6321480259049848 | Galaxy | | 2.6051710846973517 | Planet | | 2.213363839400643 | Star System | +--------------------------+-------------+ SELECT 3 rows in set (... sec) .. _aggregation-hyperloglog-distinct: ``hyperloglog_distinct(column, [precision])`` --------------------------------------------- The ``hyperloglog_distinct`` aggregate function calculates an approximate count of distinct non-null values using the `HyperLogLog++`_ algorithm. The return value data type is always a ``bigint``. The first argument can be a reference to a column of all :ref:`data-types-primitive`. :ref:`data-types-container` and :ref:`data-types-geo` are not supported. The optional second argument defines the used ``precision`` for the `HyperLogLog++`_ algorithm. This allows to trade memory for accuracy, valid values are ``4`` to ``18``. A precision of ``4`` uses approximately ``16`` bytes of memory. Each increase in precision doubles the memory requirement. So precision ``5`` uses approximately ``32`` bytes, up to ``262144`` bytes for precision ``18``. The default value for the ``precision`` which is used if the second argument is left out is ``14``. Examples:: cr> select hyperloglog_distinct(position) from locations; +--------------------------------+ | hyperloglog_distinct(position) | +--------------------------------+ | 6 | +--------------------------------+ SELECT 1 row in set (... sec) :: cr> select hyperloglog_distinct(position, 4) from locations; +-----------------------------------+ | hyperloglog_distinct(position, 4) | +-----------------------------------+ | 6 | +-----------------------------------+ SELECT 1 row in set (... sec) .. _aggregation-mean: ``mean(column)`` ---------------- An alias for :ref:`aggregation-avg`. .. _aggregation-min: ``min(column)`` --------------- The ``min`` aggregate function returns the smallest value in a column that is not ``NULL``. Its single argument is a column name and its return value is always of the type of that column. Example:: cr> select min(position), kind ... from locations ... where name not like 'North %' ... group by kind order by min(position) asc, kind asc; +---------------+-------------+ | min(position) | kind | +---------------+-------------+ | 1 | Planet | | 1 | Star System | | 2 | Galaxy | +---------------+-------------+ SELECT 3 rows in set (... sec) :: cr> select min(date) from locations; +--------------+ | min(date) | +--------------+ | 308534400000 | +--------------+ SELECT 1 row in set (... sec) ``min`` returns ``NULL`` if the column does not contain any value but ``NULL``. It is allowed on columns with primitive data types. On ``text`` columns it will return the lexicographically smallest. :: cr> select min(name), kind from locations ... group by kind order by kind asc; +------------------------------------+-------------+ | min(name) | kind | +------------------------------------+-------------+ | Galactic Sector QQ7 Active J Gamma | Galaxy | | | Planet | | Aldebaran | Star System | +------------------------------------+-------------+ SELECT 3 rows in set (... sec) .. _aggregation-max: ``max(column)`` --------------- It behaves exactly like ``min`` but returns the biggest value in a column that is not ``NULL``. Some Examples:: cr> select max(position), kind from locations ... 
group by kind order by kind desc; +---------------+-------------+ | max(position) | kind | +---------------+-------------+ | 4 | Star System | | 5 | Planet | | 6 | Galaxy | +---------------+-------------+ SELECT 3 rows in set (... sec) :: cr> select max(position) from locations; +---------------+ | max(position) | +---------------+ | 6 | +---------------+ SELECT 1 row in set (... sec) :: cr> select max(name), kind from locations ... group by kind order by max(name) desc; +-------------------+-------------+ | max(name) | kind | +-------------------+-------------+ | Outer Eastern Rim | Galaxy | | Bartledan | Planet | | Altair | Star System | +-------------------+-------------+ SELECT 3 rows in set (... sec) .. _aggregation-max_by: ``max_by(returnField, searchField)`` ------------------------------------ Returns the value of ``returnField`` where ``searchField`` has the highest value. If there are ties for ``searchField`` the result is non-deterministic and can be any of the ``returnField`` values of the ties. ``NULL`` values in the ``searchField`` don't count as max but are skipped. An Example:: cr> SELECT max_by(mountain, height) FROM sys.summits; +--------------------------+ | max_by(mountain, height) | +--------------------------+ | Mont Blanc | +--------------------------+ SELECT 1 row in set (... sec) .. _aggregation-min_by: ``min_by(returnField, searchField)`` ------------------------------------ Returns the value of ``returnField`` where ``searchField`` has the lowest value. If there are ties for ``searchField`` the result is non-deterministic and can be any of the ``returnField`` values of the ties. ``NULL`` values in the ``searchField`` don't count as min but are skipped. An Example:: cr> SELECT min_by(mountain, height) FROM sys.summits; +--------------------------+ | min_by(mountain, height) | +--------------------------+ | Puy de Rent | +--------------------------+ SELECT 1 row in set (... sec) .. _aggregation-stddev: ``stddev(column)`` ------------------ The ``stddev`` aggregate function computes the `Standard Deviation`_ of the set of non-null values in a column. It is a measure of the variation of data values. A low standard deviation indicates that the values tend to be near the mean. ``stddev`` is defined on all numeric types and on timestamp. It always returns ``double precision`` values. If all values were null or we got no value at all ``NULL`` is returned. Example:: cr> select stddev(position), kind from locations ... group by kind order by kind; +--------------------+-------------+ | stddev(position) | kind | +--------------------+-------------+ | 1.920286436967152 | Galaxy | | 1.4142135623730951 | Planet | | 1.118033988749895 | Star System | +--------------------+-------------+ SELECT 3 rows in set (... sec) .. CAUTION:: Due to java double precision arithmetic it is possible that any two executions of the aggregate function on the same data produce slightly differing results. .. _aggregation-string-agg: ``string_agg(column, delimiter)`` --------------------------------- The ``string_agg`` aggregate function concatenates the input values into a string, where each value is separated by a delimiter. If all input values are null, null is returned as a result. :: cr> select string_agg(col1, ', ') from (values('a'), ('b'), ('c')) as t; +------------------------+ | string_agg(col1, ', ') | +------------------------+ | a, b, c | +------------------------+ SELECT 1 row in set (... sec) .. SEEALSO:: :ref:`aggregation-array-agg` .. 
_aggregation-percentile: ``percentile(column, {fraction | fractions} [, compression])`` -------------------------------------------------------------- The ``percentile`` aggregate function computes a `Percentile`_ over numeric non-null values in a column. Percentiles show the point at which a certain percentage of observed values occur. For example, the 98th percentile is the value which is greater than 98% of the observed values. The result is defined and computed as an interpolated weighted average. This allows the median of the input data to be defined conveniently as the 50th percentile. The :ref:`function ` expects a single fraction or an array of fractions and a column name. Independent of the input column's data type, ``percentile`` always returns a ``double precision`` value. If the value at the specified column is ``null``, the row is ignored. Fractions must be double precision values between 0 and 1. When supplied a single fraction, the function will return a single value corresponding to the percentile of the specified fraction:: cr> select percentile(position, 0.95), kind from locations ... group by kind order by kind; +----------------------------+-------------+ | percentile(position, 0.95) | kind | +----------------------------+-------------+ | 6.0 | Galaxy | | 5.0 | Planet | | 4.0 | Star System | +----------------------------+-------------+ SELECT 3 rows in set (... sec) When supplied an array of fractions, the function will return an array of values corresponding to the percentile of each fraction specified:: cr> select percentile(position, [0.0013, 0.9987]) as perc from locations; +------------+ | perc | +------------+ | [1.0, 6.0] | +------------+ SELECT 1 row in set (... sec) If a query with the ``percentile`` function does not match any rows, a null result is returned. To calculate percentiles over huge amounts of data and to scale out, CrateDB computes approximate rather than exact percentiles. The algorithm used by the percentile metric is called `TDigest`_. The accuracy/size trade-off of the algorithm is defined by a single ``compression`` parameter, which has a default value of ``100.0`` but can be overridden by passing an optional third ``double`` argument as the ``compression``. However, there are a few guidelines to keep in mind in this implementation: - Extreme percentiles (e.g. 99%) are more accurate. - For small sets, percentiles are highly accurate. - It is difficult to generalize the exact level of accuracy, as it depends on your data distribution and volume of data being aggregated. - The ``compression`` parameter is a trade-off between accuracy and memory usage. A higher value will result in more accurate percentiles but will consume more memory. .. _aggregation-sum: ``sum(column)`` --------------- Returns the sum of a set of numeric input values that are not ``NULL``. Depending on the argument type a suitable return type is chosen. For ``interval`` argument types the return type is ``interval``. For ``real`` and ``double precision`` argument types the return type is equal to the argument type. For ``byte``, ``smallint``, ``integer`` and ``bigint`` the return type changes to ``bigint``. If the range of ``bigint`` values (-2^63 to 2^63-1) gets exceeded, an ``ArithmeticException`` will be raised. :: cr> select sum(position), kind from locations ... 
group by kind order by sum(position) asc; +---------------+-------------+ | sum(position) | kind | +---------------+-------------+ | 10 | Star System | | 13 | Galaxy | | 15 | Planet | +---------------+-------------+ SELECT 3 rows in set (... sec) :: cr> select sum(position) as position_sum from locations; +--------------+ | position_sum | +--------------+ | 38 | +--------------+ SELECT 1 row in set (... sec) :: cr> select sum(name), kind from locations group by kind order by sum(name) desc; SQLParseException[Cannot cast value `Aldebaran` to type `byte`] If a ``sum`` aggregation on a fixed-length numeric data type can potentially exceed its range, the overflow can be handled by casting the :ref:`function ` argument to the :ref:`numeric type ` with arbitrary precision. .. Hidden: create user visits table cr> CREATE TABLE uservisits (id integer, count bigint) ... CLUSTERED INTO 1 SHARDS ... WITH (number_of_replicas = 0); CREATE OK, 1 row affected (... sec) .. Hidden: insert into uservisits table cr> INSERT INTO uservisits VALUES (1, 9223372036854775806), (2, 10); INSERT OK, 2 rows affected (... sec) .. Hidden: refresh uservisits table cr> REFRESH TABLE uservisits; REFRESH OK, 1 row affected (... sec) The ``sum`` aggregation on the ``bigint`` column will result in an overflow in the following aggregation query:: cr> SELECT sum(count) ... FROM uservisits; ArithmeticException[long overflow] To address the overflow of the sum aggregation on the given field, we cast the aggregation column to the ``numeric`` data type:: cr> SELECT sum(count::numeric) ... FROM uservisits; +-----------------------------+ | sum(cast(count AS NUMERIC)) | +-----------------------------+ | 9223372036854775816 | +-----------------------------+ SELECT 1 row in set (... sec) .. Hidden: drop uservisits table cr> DROP TABLE uservisits; DROP OK, 1 row affected (... sec) .. _aggregation-variance: ``variance(column)`` -------------------- The ``variance`` aggregate function computes the `Variance`_ of the set of non-null values in a column. It is a measure of how far a set of numbers is spread out. A variance of ``0.0`` indicates that all values are the same. ``variance`` is defined on all numeric types and on timestamps. It returns a ``double precision`` value. If all values are ``NULL``, or if there are no values at all, ``NULL`` is returned. Example:: cr> select variance(position), kind from locations ... group by kind order by kind desc; +--------------------+-------------+ | variance(position) | kind | +--------------------+-------------+ | 1.25 | Star System | | 2.0 | Planet | | 3.6875 | Galaxy | +--------------------+-------------+ SELECT 3 rows in set (... sec) .. CAUTION:: Due to Java double precision arithmetic, it is possible that any two executions of the aggregate function on the same data produce slightly differing results. .. _aggregation-topk: ``topk(column, [k], [max_capacity])`` ------------------------------------- The ``topk`` aggregate function computes the ``k`` most frequent values. The result is an ``OBJECT`` in the following format:: { "frequencies": [ { "estimate": <estimate>, "item": <item>, "lower_bound": <lower_bound>, "upper_bound": <upper_bound> }, ... ], "maximum_error": <maximum_error> } The ``frequencies`` list is ordered by the estimated frequency, with the most common items listed first. ``k`` defaults to 8 and can't exceed 5000. The ``max_capacity`` parameter is optional, describes the maximum number of tracked items, must be a power of 2, and defaults to 8192.
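Both optional arguments can be supplied together. As a minimal illustrative sketch (not executed against the sample data, so no output is shown), a call that requests only the two most frequent values while raising the tracking capacity to ``16384`` could look like this:

.. code-block:: sql

    SELECT topk(country, 2, 16384) FROM sys.summits;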
Example:: cr> select topk(country, 3) from sys.summits; +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | topk(country, 3) | +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | {"frequencies": [{"estimate": 436, "item": "IT", "lower_bound": 436, "upper_bound": 436}, {"estimate": 401, "item": "AT", "lower_bound": 401, "upper_bound": 401}, {"estimate": 320, "item": "CH", "lower_bound": 320, "upper_bound": 320}], "maximum_error": 0} | +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ SELECT 1 row in set (... sec) Internally, a `Frequency Sketch`_ is used to track the frequencies of the most common values. Higher values of ``max_capacity`` provide better accuracy at the cost of increased memory usage. If fewer distinct items than 75 % of the ``max_capacity`` are processed, the frequencies in the result are exact; otherwise they are an approximation. The result contains all values whose frequencies are above the error threshold and may also include false positives. The error threshold indicates the minimum frequency which can be detected reliably and is defined as follows:: M = max_capacity, always a power of 2 N = Total count of items e = Epsilon = 3.5/M (minimum detectable frequency) error threshold = (N < 0.75 * M)? 0 : e * N. The following table is an extract of the `Error Threshold Table`_ and shows the error threshold in relation to the ``max_capacity`` and the number of processed items. A threshold of 0 indicates that the frequencies are exact. .. list-table:: Error Threshold :widths: 20 20 20 20 20 20 20 20 :header-rows: 1 :stub-columns: 1 * - max_capacity vs. items - 8192 - 16384 - 32768 - 65536 - 131072 - 262144 - 524288 * - 10000 - 4 - 0 - 0 - 0 - 0 - 0 - 0 * - 100000 - 43 - 21 - 11 - 5 - 3 - 0 - 0 * - 1000000 - 427 - 214 - 107 - 53 - 27 - 13 - 7 * - 10000000 - 4272 - 2136 - 1068 - 534 - 267 - 134 - 67 * - 100000000 - 42725 - 21362 - 10681 - 5341 - 2670 - 1335 - 668 * - 1000000000 - 427246 - 213623 - 106812 - 53406 - 26703 - 13351 - 6676 The error threshold shows which ranges of frequencies can be tracked depending on the number of items and the capacity. For example, processing 10,000 items with a ``max_capacity`` of 8192 yields an error threshold of 4. Therefore all items with frequencies greater than 4 will be included. Some items with frequencies below the threshold of 4 may also appear in the result. .. _aggregation-limitations: Limitations =========== - ``DISTINCT`` is not supported with aggregations on :ref:`sql_joins`. - Aggregate functions can only be applied to columns with a :ref:`plain index `, which is the default for all :ref:`primitive type ` columns. .. _Aggregate function: https://en.wikipedia.org/wiki/Aggregate_function .. _Geometric Mean: https://en.wikipedia.org/wiki/Geometric_mean .. _HyperLogLog++: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/40671.pdf .. _Percentile: https://en.wikipedia.org/wiki/Percentile .. 
_Standard Deviation: https://en.wikipedia.org/wiki/Standard_deviation .. _TDigest: https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf .. _Variance: https://en.wikipedia.org/wiki/Variance .. _Frequency Sketch: https://datasketches.apache.org/docs/Frequency/FrequencySketches.html .. _Error Threshold Table: https://datasketches.apache.org/docs/Frequency/FrequentItemsErrorTable.html.. highlight:: psql .. _table-functions: =============== Table functions =============== Table functions are :ref:`functions ` that produce a set of rows. They can be used in place of a relation in the ``FROM`` clause. If used within the select list, the table functions will be :ref:`evaluated ` per row of the relations in the ``FROM`` clause, generating one or more rows which are appended to the result set. If multiple table functions with different amounts of rows are used, ``NULL`` values will be returned for the functions that are exhausted. For example:: cr> select unnest([1, 2, 3]), unnest([1, 2]); +-------------------+----------------+ | unnest([1, 2, 3]) | unnest([1, 2]) | +-------------------+----------------+ | 1 | 1 | | 2 | 2 | | 3 | NULL | +-------------------+----------------+ SELECT 3 rows in set (... sec) .. note:: Table functions in the select list are executed after aggregations. So aggregations can be used as arguments to table functions, but the other way around is not allowed, unless sub queries are utilized. For example:: (SELECT aggregate_func(col) FROM (SELECT table_func(...) AS col) ...) .. rubric:: Table of contents .. contents:: :local: .. _table-functions-scalar: Scalar functions ================ A :ref:`scalar function `, when used in the ``FROM`` clause in place of a relation, will result in a table of one row and one column, containing the :ref:`scalar value ` returned from the function. :: cr> SELECT * FROM abs(-5), initcap('hello world'); +-----+-------------+ | abs | initcap | +-----+-------------+ | 5 | Hello World | +-----+-------------+ SELECT 1 row in set (... sec) ``empty_row( )`` ================ empty_row doesn't take any argument and produces a table with an empty row and no column. :: cr> select * from empty_row(); SELECT OK, 1 row affected (... sec) .. _unnest: ``unnest( array [ array , ] )`` =============================== unnest takes any number of array parameters and produces a table where each provided array argument results in a column. The columns are named ``colN`` where ``N`` is a number starting at 1. :: cr> select * from unnest([1, 2, 3], ['Arthur', 'Trillian', 'Marvin']); +------+----------+ | col1 | col2 | +------+----------+ | 1 | Arthur | | 2 | Trillian | | 3 | Marvin | +------+----------+ SELECT 3 rows in set (... sec) .. _table-functions-generate-series: ``pg_catalog.generate_series(start, stop, [step])`` =================================================== Generate a series of values from inclusive start to inclusive stop with ``step`` increments. The argument can be ``integer`` or ``bigint``, in which case ``step`` is optional and defaults to ``1``. ``start`` and ``stop`` can also be of type ``timestamp with time zone`` or ``timestamp without time zone`` in which case ``step`` is required and must be of type ``interval``. The return value always matches the ``start`` / ``stop`` types. :: cr> SELECT * FROM generate_series(1, 4); +-----------------+ | generate_series | +-----------------+ | 1 | | 2 | | 3 | | 4 | +-----------------+ SELECT 4 rows in set (... sec) :: cr> SELECT ... x, ... date_format('%Y-%m-%d, %H:%i', x) ... 
FROM generate_series('2019-01-01 00:00'::timestamp, '2019-01-04 00:00'::timestamp, '30 hours'::interval) AS t(x); +---------------+-----------------------------------+ | x | date_format('%Y-%m-%d, %H:%i', x) | +---------------+-----------------------------------+ | 1546300800000 | 2019-01-01, 00:00 | | 1546408800000 | 2019-01-02, 06:00 | | 1546516800000 | 2019-01-03, 12:00 | +---------------+-----------------------------------+ SELECT 3 rows in set (... sec) .. _table-functions-generate-subscripts: ``pg_catalog.generate_subscripts(array, dim, [reverse])`` ========================================================= Generate the subscripts for the specified dimension ``dim`` of the given ``array``. Zero rows are returned for arrays that do not have the requested dimension, or for ``NULL`` arrays (but valid subscripts are returned for ``NULL`` array elements). If ``reverse`` is ``true`` the subscripts will be returned in reverse order. This example takes a one dimensional array of four elements, where elements at positions 1 and 3 are ``NULL``: :: cr> SELECT generate_subscripts([NULL, 1, NULL, 2], 1) AS s; +---+ | s | +---+ | 1 | | 2 | | 3 | | 4 | +---+ SELECT 4 rows in set (... sec) This example returns the reversed list of subscripts for the same array: :: cr> SELECT generate_subscripts([NULL, 1, NULL, 2], 1, true) AS s; +---+ | s | +---+ | 4 | | 3 | | 2 | | 1 | +---+ SELECT 4 rows in set (... sec) This example works on an array of three dimensions. Each of the elements within a given level must be either ``NULL``, or an array of the same size as the other arrays within the same level. :: cr> select generate_subscripts([[[1],[2]], [[3],[4]], [[4],[5]]], 2) as s; +---+ | s | +---+ | 1 | | 2 | +---+ SELECT 2 rows in set (... sec) .. _table-functions-regexp-matches: ``regexp_matches(source, pattern [, flags])`` ============================================= Uses the :ref:`regular expression ` ``pattern`` to match against the ``source`` string. The result rows have one column: .. list-table:: :header-rows: 1 * - Column name - Description * - groups - ``array(text)`` If ``pattern`` matches ``source``, an array of the matched regular expression groups is returned. If no regular expression group was used, the whole pattern is used as a group. A regular expression group is formed by a subexpression that is surrounded by parentheses. The position of a group is determined by the position of its opening parenthesis. For example when matching the pattern ``\b([A-Z])`` a match for the subexpression ``([A-Z])`` would create group No. 1. If you want to group items with parentheses, but without grouping, use ``(?...)``. For example matching the regular expression ``([Aa](.+)z)`` against ``alcatraz``, results in these groups: - group 1: ``alcatraz`` (from first to last parenthesis or whole pattern) - group 2: ``lcatra`` (beginning at second parenthesis) The ``regexp_matches`` :ref:`function ` will return all groups as a ``text`` array:: cr> select regexp_matches('alcatraz', '(a(.+)z)') as matched; +------------------------+ | matched | +------------------------+ | ["alcatraz", "lcatra"] | +------------------------+ SELECT 1 row in set (... sec) :: cr> select regexp_matches('alcatraz', 'traz') as matched; +----------+ | matched | +----------+ | ["traz"] | +----------+ SELECT 1 row in set (... sec) Through array element access functionality, a group can be selected directly. See :ref:`sql_dql_object_arrays` for details. 
:: cr> select regexp_matches('alcatraz', '(a(.+)z)')[2] as second_group; +--------------+ | second_group | +--------------+ | lcatra | +--------------+ SELECT 1 row in set (... sec) .. _table-functions-regexp-matches-flags: Flags ..... This function takes a number of flags as optional third parameter. These flags are given as a string containing any of the characters listed below. Order does not matter. +-------+---------------------------------------------------------------------+ | Flag | Description | +=======+=====================================================================+ | ``i`` | enable case insensitive matching | +-------+---------------------------------------------------------------------+ | ``u`` | enable unicode case folding when used together with ``i`` | +-------+---------------------------------------------------------------------+ | ``U`` | enable unicode support for character classes like ``\W`` | +-------+---------------------------------------------------------------------+ | ``s`` | make ``.`` match line terminators, too | +-------+---------------------------------------------------------------------+ | ``m`` | make ``^`` and ``$`` match on the beginning or end of a line | | | too. | +-------+---------------------------------------------------------------------+ | ``x`` | permit whitespace and line comments starting with ``#`` | +-------+---------------------------------------------------------------------+ | ``d`` | only ``\n`` is considered a line-terminator when using ``^``, ``$`` | | | and ``.`` | +-------+---------------------------------------------------------------------+ | ``g`` | keep matching until the end of ``source``, instead of stopping at | | | the first match. | +-------+---------------------------------------------------------------------+ Examples ........ In this example the ``pattern`` does not match anything in the ``source`` and the result is an empty table: :: cr> select regexp_matches('foobar', '^(a(.+)z)$') as matched; +---------+ | matched | +---------+ +---------+ SELECT 0 rows in set (... sec) In this example we find the term that follows two digits: :: cr> select regexp_matches('99 bottles of beer on the wall', '\d{2}\s(\w+).*', 'ixU') ... as matched; +-------------+ | matched | +-------------+ | ["bottles"] | +-------------+ SELECT 1 row in set (... sec) This example shows the use of flag ``g``, splitting ``source`` into a set of arrays, each containing two entries: :: cr> select regexp_matches('#abc #def #ghi #jkl', '(#[^\s]*) (#[^\s]*)', 'g') as matched; +------------------+ | matched | +------------------+ | ["#abc", "#def"] | | ["#ghi", "#jkl"] | +------------------+ SELECT 2 rows in set (... sec) .. _pg_catalog.pg_get_keywords: ``pg_catalog.pg_get_keywords()`` ================================ Returns a list of SQL keywords and their categories. The result rows have three columns: .. list-table:: :header-rows: 1 * - Column name - Description * - ``word`` - The SQL keyword * - ``catcode`` - Code for the category (`R` for reserved keywords, `U` for unreserved keywords) * - ``catdesc`` - The description of the category :: cr> SELECT * FROM pg_catalog.pg_get_keywords() ORDER BY 1 LIMIT 4; +----------+---------+------------+ | word | catcode | catdesc | +----------+---------+------------+ | absolute | U | unreserved | | add | R | reserved | | alias | U | unreserved | | all | R | reserved | +----------+---------+------------+ SELECT 4 rows in set (... sec) .. 
_information_schema._pg_expandarray: ``information_schema._pg_expandarray(array)`` ============================================= Takes an array and returns a set of value and an index into the array. .. list-table:: :header-rows: 1 * - Column name - Description * - x - Value within the array * - n - Index of the value within the array :: cr> SELECT information_schema._pg_expandarray(ARRAY['a', 'b']) AS result; +----------+ | result | +----------+ | ["a", 1] | | ["b", 2] | +----------+ SELECT 2 rows in set (... sec) :: cr> SELECT * from information_schema._pg_expandarray(ARRAY['a', 'b']); +---+---+ | x | n | +---+---+ | a | 1 | | b | 2 | +---+---+ SELECT 2 rows in set (... sec).. highlight:: psql .. _window-functions: ================ Window functions ================ Window functions are :ref:`functions ` which perform a computation across a set of rows which are related to the current row. This is comparable to :ref:`aggregation functions `, but window functions do not cause multiple rows to be grouped into a single row. .. rubric:: Table of contents .. contents:: :local: .. _window-function-call: Window function call ==================== .. _window-call-synopsis: Synopsis -------- The synopsis of a window function call is one of the following :: function_name ( { * | [ expression [, expression ... ] ] } ) [ FILTER ( WHERE condition ) ] [ { RESPECT | IGNORE } NULLS ] over_clause where ``function_name`` is a name of a :ref:`general-purpose window ` or :ref:`aggregate function ` and ``expression`` is a column reference, :ref:`scalar function ` or literal. If ``FILTER`` is specified, then only the rows that met the :ref:`WHERE ` condition are supplied to the window function. Only window functions that are :ref:`aggregates ` accept the ``FILTER`` clause. If ``IGNORE NULLS`` option is specified, then the null values are excluded from the window function executions. The window functions that support this option are: :ref:`window-functions-lead`, :ref:`window-functions-lag`, :ref:`window-functions-first-value`, :ref:`window-functions-last-value`, and :ref:`window-functions-nth-value`. If a function supports this option and it is not specified, then ``RESPECT NULLS`` is set by default. The :ref:`window-definition-over` clause is what declares a function to be a window function. The window function call that uses a ``wildcard`` instead of an ``expression`` as a function argument is supported only by the ``count(*)`` aggregate function. .. _window-definition: Window definition ================= .. _window-definition-over: OVER ---- .. _window-definition-over-synopsis: Synopsis ........ :: OVER { window_name | ( [ window_definition ] ) } where ``window_definition`` has the syntax :: window_definition: [ window_name ] [ PARTITION BY expression [, ...] ] [ ORDER BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] ] [ { RANGE | ROWS } BETWEEN frame_start AND frame_end ] The ``window_name`` refers to ``window_definition`` defined in the :ref:`WINDOW ` clause. The ``frame_start`` and ``frame_end`` can be one of :: UNBOUNDED PRECEDING offset PRECEDING CURRENT ROW offset FOLLOWING UNBOUNDED FOLLOWING The default frame definition is ``RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW``. If ``frame_end`` is omitted it defaults to ``CURRENT ROW``. ``frame_start`` cannot be ``FOLLOWING`` or ``UNBOUNDED FOLLOWING`` and ``frame_end`` cannot be ``PRECEDING`` or ``UNBOUNDED PRECEDING``. 
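For illustration, a rolling sum over the current row and the two rows before it can be expressed with an explicit ``ROWS`` frame. This is only a sketch: the table ``t`` and its numeric column ``x`` are placeholders, not objects used elsewhere in this documentation.

.. code-block:: sql

    SELECT x,
           sum(x) OVER (ORDER BY x ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS rolling_sum
    FROM t;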
In ``RANGE`` mode if the ``frame_start`` is ``CURRENT ROW`` the frame starts with the current row's first peer (a row that the window's ``ORDER BY`` :ref:`expression ` sorts as equal to the current row), while a ``frame_end`` of ``CURRENT ROW`` means the frame will end with the current's row last peer row. In ``ROWS`` mode ``CURRENT_ROW`` means the current row. The ``offset PRECEDING`` and ``offset FOLLOWING`` options vary in meaning depending on the frame mode. In ``ROWS`` mode, the ``offset`` is an integer indicating that the frame start or end is offsetted by that many rows before or after the current row. In ``RANGE`` mode, the use of a custom ``offset`` option requires that there is exactly one ``ORDER BY`` column in the window definition. The frame contains those rows whose ordering column value is no more than ``offset`` minus (for ``PRECEDING``) or plus (for ``FOLLOWING``) the current row's ordering column value. Because the value of ``offset`` is subtracted/added to the values of the ordering column, only type combinations that support addition/subtraction operations are allowed. For instance, when the ordering column is of type :ref:`timestamp `, the ``offset`` expression can be an :ref:`interval `. The :ref:`window-definition-over` clause defines the ``window`` containing the appropriate rows which will take part in the ``window function`` computation. An empty :ref:`window-definition-over` clause defines a ``window`` containing all the rows in the result set. Example:: cr> SELECT dept_id, COUNT(*) OVER() AS cnt FROM employees ORDER BY 1, 2; +---------+-----+ | dept_id | cnt | +---------+-----+ | 4001 | 18 | | 4001 | 18 | | 4001 | 18 | | 4002 | 18 | | 4002 | 18 | | 4002 | 18 | | 4002 | 18 | | 4003 | 18 | | 4003 | 18 | | 4003 | 18 | | 4003 | 18 | | 4003 | 18 | | 4004 | 18 | | 4004 | 18 | | 4004 | 18 | | 4006 | 18 | | 4006 | 18 | | 4006 | 18 | +---------+-----+ SELECT 18 rows in set (... sec) The ``PARTITION BY`` clause groups the rows within a window into partitions which are processed separately by the window function, each partition in turn becoming a window. If ``PARTITION BY`` is not specified, all the rows are considered a single partition. Example:: cr> SELECT dept_id, ROW_NUMBER() OVER(PARTITION BY dept_id) AS row_num ... FROM employees ORDER BY 1, 2; +---------+---------+ | dept_id | row_num | +---------+---------+ | 4001 | 1 | | 4001 | 2 | | 4001 | 3 | | 4002 | 1 | | 4002 | 2 | | 4002 | 3 | | 4002 | 4 | | 4003 | 1 | | 4003 | 2 | | 4003 | 3 | | 4003 | 4 | | 4003 | 5 | | 4004 | 1 | | 4004 | 2 | | 4004 | 3 | | 4006 | 1 | | 4006 | 2 | | 4006 | 3 | +---------+---------+ SELECT 18 rows in set (... sec) If ``ORDER BY`` is supplied the ``window`` definition consists of a range of rows starting with the first row in the ``partition`` and ending with the current row, plus any subsequent rows that are equal to the current row, which are the current row's ``peers``. Example:: cr> SELECT ... dept_id, ... sex, ... COUNT(*) OVER(PARTITION BY dept_id ORDER BY sex) AS cnt ... FROM employees ... ORDER BY 1, 2, 3 +---------+-----+-----+ | dept_id | sex | cnt | +---------+-----+-----+ | 4001 | M | 3 | | 4001 | M | 3 | | 4001 | M | 3 | | 4002 | F | 1 | | 4002 | M | 4 | | 4002 | M | 4 | | 4002 | M | 4 | | 4003 | M | 5 | | 4003 | M | 5 | | 4003 | M | 5 | | 4003 | M | 5 | | 4003 | M | 5 | | 4004 | F | 1 | | 4004 | M | 3 | | 4004 | M | 3 | | 4006 | F | 1 | | 4006 | M | 3 | | 4006 | M | 3 | +---------+-----+-----+ SELECT 18 rows in set (... sec) .. 
NOTE:: Taking into account the ``peers`` concept mentioned above, for an empty :ref:`window-definition-over` clause all the rows in the result set are ``peers``. .. NOTE:: :ref:`Aggregation functions ` will be treated as ``window functions`` when used in conjunction with the :ref:`window-definition-over` clause. .. NOTE:: Window definitions order or partitioned by an array column type are currently not supported. In the ``UNBOUNDED FOLLOWING`` case the ``window`` for each row starts with each row and ends with the last row in the current ``partition``. If the ``current row`` has ``peers`` the ``window`` will include (or start with) all the ``current row`` peers and end at the upper bound of the ``partition``. Example:: cr> SELECT ... dept_id, ... sex, ... COUNT(*) OVER( ... PARTITION BY dept_id ... ORDER BY ... sex RANGE BETWEEN CURRENT ROW ... AND UNBOUNDED FOLLOWING ... ) partitionByDeptOrderBySex ... FROM employees ... ORDER BY 1, 2, 3 +---------+-----+---------------------------+ | dept_id | sex | partitionbydeptorderbysex | +---------+-----+---------------------------+ | 4001 | M | 3 | | 4001 | M | 3 | | 4001 | M | 3 | | 4002 | F | 4 | | 4002 | M | 3 | | 4002 | M | 3 | | 4002 | M | 3 | | 4003 | M | 5 | | 4003 | M | 5 | | 4003 | M | 5 | | 4003 | M | 5 | | 4003 | M | 5 | | 4004 | F | 3 | | 4004 | M | 2 | | 4004 | M | 2 | | 4006 | F | 3 | | 4006 | M | 2 | | 4006 | M | 2 | +---------+-----+---------------------------+ SELECT 18 rows in set (... sec) .. _window-definition-named-windows: Named windows ------------- It is possible to define a list of named window definitions that can be referenced in :ref:`window-definition-over` clauses. To do this, use the :ref:`sql-select-window` clause in the :ref:`sql-select` clause. Named windows are particularly useful when the same window definition could be used in multiple :ref:`window-definition-over` clauses. For instance :: cr> SELECT ... x, ... FIRST_VALUE(x) OVER (w) AS "first", ... LAST_VALUE(x) OVER (w) AS "last" ... FROM (VALUES (1), (2), (3), (4)) AS t(x) ... WINDOW w AS (ORDER BY x) +---+-------+------+ | x | first | last | +---+-------+------+ | 1 | 1 | 1 | | 2 | 1 | 2 | | 3 | 1 | 3 | | 4 | 1 | 4 | +---+-------+------+ SELECT 4 rows in set (... sec) If a ``window_name`` is specified in the window definition of the :ref:`window-definition-over` clause, then there must be a named window entry that matches the ``window_name`` in the window definition list of the :ref:`sql-select-window` clause. If the :ref:`window-definition-over` clause has its own non-empty window definition and references a window definition from the :ref:`sql-select-window` clause, then it can only add clauses from the referenced window, but not overwrite them. :: cr> SELECT ... x, ... LAST_VALUE(x) OVER (w ORDER BY x) AS y ... FROM (VALUES ... (1, 1), ... (2, 1), ... (3, 2), ... (4, 2) ) AS t(x, y) ... WINDOW w AS (PARTITION BY y) +---+---+ | x | y | +---+---+ | 1 | 1 | | 2 | 2 | | 3 | 3 | | 4 | 4 | +---+---+ SELECT 4 rows in set (... sec) Otherwise, an attempt to override the clauses of the referenced window by the window definition of the :ref:`window-definition-over` clause will result in failure. :: cr> SELECT ... FIRST_VALUE(x) OVER (w ORDER BY x) ... FROM (VALUES(1), (2), (3), (4)) as t(x) ... 
WINDOW w AS (ORDER BY x) SQLParseException[Cannot override ORDER BY clause of window w] It is not possible to define the ``PARTITION BY`` clause in the window definition of the :ref:`window-definition-over` clause if it references a window definition from the :ref:`sql-select-window` clause. The window definitions in the :ref:`sql-select-window` clause cannot define its own window frames, if they are referenced by non-empty window definitions of the :ref:`window-definition-over` clauses. The definition of the named window can itself begin with a ``window_name``. In this case all the elements of interconnected named windows will be copied to the window definition of the :ref:`window-definition-over` clause if it references the named window definition that has subsequent window references. The window definitions in the ``WINDOW`` clause permits only backward references. :: cr> SELECT ... x, ... ROW_NUMBER() OVER (w) AS y ... FROM (VALUES ... (1, 1), ... (3, 2), ... (2, 1)) AS t (x, y) ... WINDOW p AS (PARTITION BY y), ... w AS (p ORDER BY x) +---+---+ | x | y | +---+---+ | 1 | 1 | | 2 | 2 | | 3 | 1 | +---+---+ SELECT 3 rows in set (... sec) .. _window-functions-general-purpose: General-purpose window functions ================================ ``row_number()`` ---------------- Returns the number of the current row within its window. Example:: cr> SELECT ... col1, ... ROW_NUMBER() OVER(ORDER BY col1) as row_num ... FROM (VALUES('x'), ('y'), ('z')) AS t; +------+---------+ | col1 | row_num | +------+---------+ | x | 1 | | y | 2 | | z | 3 | +------+---------+ SELECT 3 rows in set (... sec) .. _window-functions-first-value: ``first_value(arg)`` -------------------- Returns the argument value :ref:`evaluated ` at the first row within the window. Its return type is the type of its argument. Example:: cr> SELECT ... col1, ... FIRST_VALUE(col1) OVER (ORDER BY col1) AS value ... FROM (VALUES('x'), ('y'), ('y'), ('z')) AS t; +------+-------+ | col1 | value | +------+-------+ | x | x | | y | x | | y | x | | z | x | +------+-------+ SELECT 4 rows in set (... sec) .. _window-functions-last-value: ``last_value(arg)`` ------------------- Returns the argument value :ref:`evaluated ` at the last row within the window. Its return type is the type of its argument. Example:: cr> SELECT ... col1, ... LAST_VALUE(col1) OVER(ORDER BY col1) AS value ... FROM (VALUES('x'), ('y'), ('y'), ('z')) AS t; +------+-------+ | col1 | value | +------+-------+ | x | x | | y | y | | y | y | | z | z | +------+-------+ SELECT 4 rows in set (... sec) .. _window-functions-nth-value: ``nth_value(arg, number)`` -------------------------- Returns the argument value :ref:`evaluated ` at row that is the nth row within the window. ``NULL`` is returned if the nth row doesn't exist in the window. Its return type is the type of its first argument. Example:: cr> SELECT ... col1, ... NTH_VALUE(col1, 3) OVER(ORDER BY col1) AS val ... FROM (VALUES ('x'), ('y'), ('y'), ('z')) AS t; +------+------+ | col1 | val | +------+------+ | x | NULL | | y | y | | y | y | | z | y | +------+------+ SELECT 4 rows in set (... sec) .. _window-functions-lag: ``lag(arg [, offset [, default] ])`` ------------------------------------ .. _window-functions-lag-synopsis: Synopsis ........ :: lag(argument any [, offset integer [, default any]]) Returns the argument value :ref:`evaluated ` at the row that precedes the current row by the offset within the partition. If there is no such row, the return value is ``default``. 
If ``offset`` or ``default`` arguments are missing, they default to ``1`` and ``null``, respectively. Both ``offset`` and ``default`` are evaluated with respect to the current row. If ``offset`` is ``0``, then argument value is evaluated for the current row. The ``default`` and ``argument`` data types must match. Example:: cr> SELECT ... dept_id, ... year, ... budget, ... LAG(budget) OVER( ... PARTITION BY dept_id) prev_budget ... FROM (VALUES ... (1, 2017, 45000), ... (1, 2018, 35000), ... (2, 2017, 15000), ... (2, 2018, 65000), ... (2, 2019, 12000)) ... as t (dept_id, year, budget); +---------+------+--------+-------------+ | dept_id | year | budget | prev_budget | +---------+------+--------+-------------+ | 1 | 2017 | 45000 | NULL | | 1 | 2018 | 35000 | 45000 | | 2 | 2017 | 15000 | NULL | | 2 | 2018 | 65000 | 15000 | | 2 | 2019 | 12000 | 65000 | +---------+------+--------+-------------+ SELECT 5 rows in set (... sec) .. _window-functions-lead: ``lead(arg [, offset [, default] ])`` ------------------------------------- .. _window-functions-lead-synopsis: Synopsis ........ :: lead(argument any [, offset integer [, default any]]) The ``lead`` function is the counterpart of the :ref:`lag window function ` as it allows the :ref:`evaluation ` of the argument at rows that follow the current row. ``lead`` returns the argument value evaluated at the row that follows the current row by the offset within the partition. If there is no such row, the return value is ``default``. If ``offset`` or ``default`` arguments are missing, they default to ``1`` or ``null``, respectively. Both ``offset`` and ``default`` are evaluated with respect to the current row. If ``offset`` is ``0``, then argument value is evaluated for the current row. The ``default`` and ``argument`` data types must match. Example:: cr> SELECT ... dept_id, ... year, ... budget, ... LEAD(budget) OVER( ... PARTITION BY dept_id) next_budget ... FROM (VALUES ... (1, 2017, 45000), ... (1, 2018, 35000), ... (2, 2017, 15000), ... (2, 2018, 65000), ... (2, 2019, 12000)) ... as t (dept_id, year, budget); +---------+------+--------+-------------+ | dept_id | year | budget | next_budget | +---------+------+--------+-------------+ | 1 | 2017 | 45000 | 35000 | | 1 | 2018 | 35000 | NULL | | 2 | 2017 | 15000 | 65000 | | 2 | 2018 | 65000 | 12000 | | 2 | 2019 | 12000 | NULL | +---------+------+--------+-------------+ SELECT 5 rows in set (... sec) .. _window-functions-rank: ``rank()`` ---------- .. _window-functions-rank-synopsis: Synopsis ........ :: rank() Returns the rank of every row within a partition of a result set. Within each partition, the rank of the first row is ``1``. Subsequent tied rows are given the same rank, and the potential rank of the next row is incremented. Because of this, ranks may not be sequential. Example:: cr> SELECT ... name, ... department_id, ... salary, ... RANK() OVER (ORDER BY salary desc) as salary_rank ... FROM (VALUES ... ('Bobson Dugnutt', 1, 2000), ... ('Todd Bonzalez', 2, 2500), ... ('Jess Brewer', 1, 2500), ... ('Safwan Buchanan', 1, 1900), ... ('Hal Dodd', 1, 2500), ... ('Gillian Hawes', 2, 2000)) ... 
as t (name, department_id, salary); +-----------------+---------------+--------+-------------+ | name | department_id | salary | salary_rank | +-----------------+---------------+--------+-------------+ | Todd Bonzalez | 2 | 2500 | 1 | | Jess Brewer | 1 | 2500 | 1 | | Hal Dodd | 1 | 2500 | 1 | | Bobson Dugnutt | 1 | 2000 | 4 | | Gillian Hawes | 2 | 2000 | 4 | | Safwan Buchanan | 1 | 1900 | 6 | +-----------------+---------------+--------+-------------+ SELECT 6 rows in set (... sec) .. _window-functions-dense-rank: ``dense_rank()`` ---------------- .. _window-functions-dense-rank-synopsis: Synopsis ........ :: dense_rank() Returns the rank of every row within a partition of a result set, similar to ``rank``. However, unlike ``rank``, ``dense_rank`` always returns sequential rank values. Within each partition, the rank of the first row is ``1``. Subsequent tied rows are given the same rank. Example:: cr> SELECT ... name, ... department_id, ... salary, ... DENSE_RANK() OVER (ORDER BY salary desc) as salary_rank ... FROM (VALUES ... ('Bobson Dugnutt', 1, 2000), ... ('Todd Bonzalez', 2, 2500), ... ('Jess Brewer', 1, 2500), ... ('Safwan Buchanan', 1, 1900), ... ('Hal Dodd', 1, 2500), ... ('Gillian Hawes', 2, 2000)) ... as t (name, department_id, salary); +-----------------+---------------+--------+-------------+ | name | department_id | salary | salary_rank | +-----------------+---------------+--------+-------------+ | Todd Bonzalez | 2 | 2500 | 1 | | Jess Brewer | 1 | 2500 | 1 | | Hal Dodd | 1 | 2500 | 1 | | Bobson Dugnutt | 1 | 2000 | 2 | | Gillian Hawes | 2 | 2000 | 2 | | Safwan Buchanan | 1 | 1900 | 3 | +-----------------+---------------+--------+-------------+ SELECT 6 rows in set (... sec) .. _window-aggregate-functions: Aggregate window functions ========================== See :ref:`aggregation`... _user-defined-functions: ====================== User-defined functions ====================== .. rubric:: Table of contents .. contents:: :local: .. _udf-create-replace: ``CREATE OR REPLACE`` ===================== CrateDB supports user-defined :ref:`functions `. See :ref:`ref-create-function` for a full syntax description. ``CREATE FUNCTION`` defines a new function:: cr> CREATE FUNCTION my_subtract_function(integer, integer) ... RETURNS integer ... LANGUAGE JAVASCRIPT ... AS 'function my_subtract_function(a, b) { return a - b; }'; CREATE OK, 1 row affected (... sec) .. hide: cr> _wait_for_function('my_subtract_function(1::integer, 1::integer)') :: cr> SELECT doc.my_subtract_function(3, 1) AS col; +-----+ | col | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) ``CREATE OR REPLACE FUNCTION`` will either create a new function or replace an existing function definition:: cr> CREATE OR REPLACE FUNCTION log10(bigint) ... RETURNS double precision ... LANGUAGE JAVASCRIPT ... AS 'function log10(a) {return Math.log(a)/Math.log(10); }'; CREATE OK, 1 row affected (... sec) .. hide: cr> _wait_for_function('log10(1::bigint)') :: cr> SELECT doc.log10(10) AS col; +-----+ | col | +-----+ | 1.0 | +-----+ SELECT 1 row in set (... sec) It is possible to use named function arguments in the function signature. For example, the ``calculate_distance`` function signature has two ``geo_point`` arguments named ``start`` and ``end``:: cr> CREATE OR REPLACE FUNCTION calculate_distance("start" geo_point, "end" geo_point) ... RETURNS real ... LANGUAGE JAVASCRIPT ... AS 'function calculate_distance(start, end) { ... return Math.sqrt( ... Math.pow(end[0] - start[0], 2), ... Math.pow(end[1] - start[1], 2)); ... 
}'; CREATE OK, 1 row affected (... sec) .. NOTE:: Argument names are used for query documentation purposes only. You cannot reference arguments by name in the function body. Optionally, a schema-qualified function name can be defined. If you omit the schema, the current session schema is used:: cr> CREATE OR REPLACE FUNCTION my_schema.log10(bigint) ... RETURNS double precision ... LANGUAGE JAVASCRIPT ... AS 'function log10(a) { return Math.log(a)/Math.log(10); }'; CREATE OK, 1 row affected (... sec) .. NOTE:: In order to improve the PostgreSQL server compatibility CrateDB allows the creation of user defined functions against the :ref:`postgres-pg_catalog` schema. However, the creation of user defined functions against the read-only :ref:`system-information` and :ref:`information_schema` schemas is prohibited. .. _udf-supported-types: Supported types =============== Function arguments and return values can be any of the supported :ref:`data types `. The values passed into a function must strictly correspond to the specified argument data types. .. NOTE:: The value returned by the function will be casted to the return type provided in the definition if required. An exception will be thrown if the cast is not successful. .. _udf-overloading: Overloading =========== Within a specific schema, you can overload functions by defining functions with the same name but a different set of arguments:: cr> CREATE FUNCTION my_schema.my_multiply(integer, integer) ... RETURNS integer ... LANGUAGE JAVASCRIPT ... AS 'function my_multiply(a, b) { return a * b; }'; CREATE OK, 1 row affected (... sec) This would overload the ``my_multiply`` function with different argument types:: cr> CREATE FUNCTION my_schema.my_multiply(bigint, bigint) ... RETURNS bigint ... LANGUAGE JAVASCRIPT ... AS 'function my_multiply(a, b) { return a * b; }'; CREATE OK, 1 row affected (... sec) This would overload the ``my_multiply`` function with more arguments:: cr> CREATE FUNCTION my_schema.my_multiply(bigint, bigint, bigint) ... RETURNS bigint ... LANGUAGE JAVASCRIPT ... AS 'function my_multiply(a, b, c) { return a * b * c; }'; CREATE OK, 1 row affected (... sec) .. CAUTION:: It is considered bad practice to create functions that have the same name as the CrateDB built-in functions. .. NOTE:: If you call a function without a schema name, CrateDB will look it up in the built-in functions first and only then in the user-defined functions available in the :ref:`search_path `. **Therefore a built-in function with the same name as a user-defined function will hide the latter, even if it contains a different set of arguments.** However, such functions can still be called if the schema name is explicitly provided. .. _udf-determinism: Determinism =========== .. CAUTION:: User-defined functions need to be deterministic, meaning that they must always return the same result value when called with the same argument values, because CrateDB might cache the returned values and reuse the value if the function is called multiple times with the same arguments. .. _udf-drop-function: ``DROP FUNCTION`` ================= Functions can be dropped like this:: cr> DROP FUNCTION doc.log10(bigint); DROP OK, 1 row affected (... sec) Adding ``IF EXISTS`` prevents from raising an error if the function doesn't exist:: cr> DROP FUNCTION IF EXISTS doc.log10(integer); DROP OK, 1 row affected (... 
sec) Optionally, argument names can be specified within the drop statement:: cr> DROP FUNCTION IF EXISTS doc.calculate_distance(start_point geo_point, end_point geo_point); DROP OK, 1 row affected (... sec) Optionally, you can provide a schema:: cr> DROP FUNCTION my_schema.log10(bigint); DROP OK, 1 row affected (... sec) .. _udf-supported-languages: Supported languages =================== Currently, CrateDB only supports JavaScript for user-defined functions. .. _udf-js: JavaScript ---------- The JavaScript dialect used for user-defined functions is compatible with the `ECMAScript 2019`_ specification. CrateDB uses the `GraalVM JavaScript`_ engine as a JavaScript (ECMAScript) language execution runtime. The `GraalVM JavaScript`_ engine is a Java application that works on stock Java Virtual Machines (VMs). The interoperability between Java code (host language) and JavaScript user-defined functions (guest language) is guaranteed by the `GraalVM Polyglot API`_. Please note: CrateDB does not use the GraalVM JIT compiler as an optimizing compiler. However, the `stock host Java VM JIT compilers`_ can JIT-compile, optimize, and execute the GraalVM JavaScript codebase to a certain extent. The execution context for guest JavaScript is created with restricted privileges to allow for the safe execution of less trusted guest language code. The guest language application context for each user-defined function is created with default access modifiers, so any access to managed resources is denied. The only exception is the host language interoperability configuration, which explicitly allows access to Java lists and arrays. Please refer to the `GraalVM Security Guide`_ for more detailed information. Also, even though user-defined functions are implemented in ECMA-compliant JavaScript, objects that are normally accessible in a web browser (e.g. ``window``, ``console``, and so on) are not available. .. NOTE:: GraalVM treats objects provided to JavaScript user-defined functions as close as possible to their respective counterparts, and therefore by default only a subset of prototype functions are available in user-defined functions. For CrateDB 4.6 and earlier the object prototype was disabled. Please refer to the `GraalVM JavaScript Compatibility FAQ`_ to learn more about the compatibility. .. _udf-js-supported-types: JavaScript supported types .......................... JavaScript functions can handle all CrateDB data types. However, for some return types the function output must correspond to a certain format. If a function requires ``geo_point`` as a return type, then the JavaScript function must return a ``double precision`` array of size 2, a ``WKT`` string, or a ``GeoJSON`` object. Here is an example of a JavaScript function returning a ``double precision`` array:: cr> CREATE FUNCTION rotate_point(point geo_point, angle real) ... RETURNS geo_point ... LANGUAGE JAVASCRIPT ... AS 'function rotate_point(point, angle) { ... var cos = Math.cos(angle); ... var sin = Math.sin(angle); ... var x = cos * point[0] - sin * point[1]; ... var y = sin * point[0] + cos * point[1]; ... return [x, y]; ... }'; CREATE OK, 1 row affected (... sec) Below is an example of a JavaScript function returning a ``WKT`` string, which will be cast to ``geo_point``:: cr> CREATE FUNCTION symmetric_point(point geo_point) ... RETURNS geo_point ... LANGUAGE JAVASCRIPT ... AS 'function symmetric_point(point) { ... var x = - point[0], ... y = - point[1]; ... return "POINT (" + x + " " + y + ")"; ... }'; CREATE OK, 1 row affected (... 
sec) Similarly, if the function specifies the ``geo_shape`` return data type, then the JavaScript function should return a ``GeoJSON`` object or a ``WKT`` string:: cr> CREATE FUNCTION line("start" array(double precision), "end" array(double precision)) ... RETURNS object ... LANGUAGE JAVASCRIPT ... AS 'function line(start, end) { ... return { "type": "LineString", "coordinates" : [start, end] }; ... }'; CREATE OK, 1 row affected (... sec) .. NOTE:: If the return value of the JavaScript function is ``undefined``, it is converted to ``NULL``. .. _udf-js-numbers: Working with ``NUMBERS`` ........................ The JavaScript engine interprets numbers as ``java.lang.Double``, ``java.lang.Long``, or ``java.lang.Integer``, depending on the computation performed. In most cases, this is not an issue, since the return value of the JavaScript function will be cast to the return type specified in the ``CREATE FUNCTION`` statement, although the cast might result in a loss of precision. However, when you cast a ``DOUBLE PRECISION`` value to ``TIMESTAMP WITH TIME ZONE``, it will be interpreted as seconds since the Unix epoch; because ``Date.UTC()`` returns milliseconds, this results in a wrong value:: cr> CREATE FUNCTION utc(bigint, bigint, bigint) ... RETURNS TIMESTAMP WITH TIME ZONE ... LANGUAGE JAVASCRIPT ... AS 'function utc(year, month, day) { ... return Date.UTC(year, month, day, 0, 0, 0); ... }'; CREATE OK, 1 row affected (... sec) .. hide: cr> _wait_for_function('utc(1::bigint, 1::bigint, 1::bigint)') :: cr> SELECT date_format(utc(2016,04,6)) as epoque; +------------------------------+ | epoque | +------------------------------+ | 48314-07-22T00:00:00.000000Z | +------------------------------+ SELECT 1 row in set (... sec) .. hide: cr> DROP FUNCTION utc(bigint, bigint, bigint); DROP OK, 1 row affected (... sec) To avoid this behavior, the numeric value should be divided by 1000 before it is returned:: cr> CREATE FUNCTION utc(bigint, bigint, bigint) ... RETURNS TIMESTAMP WITH TIME ZONE ... LANGUAGE JAVASCRIPT ... AS 'function utc(year, month, day) { ... return Date.UTC(year, month, day, 0, 0, 0)/1000; ... }'; CREATE OK, 1 row affected (... sec) .. hide: cr> _wait_for_function('utc(1::bigint, 1::bigint, 1::bigint)') :: cr> SELECT date_format(utc(2016,04,6)) as epoque; +-----------------------------+ | epoque | +-----------------------------+ | 2016-05-06T00:00:00.000000Z | +-----------------------------+ SELECT 1 row in set (... sec) .. hide: cr> DROP FUNCTION my_subtract_function(integer, integer); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION my_schema.my_multiply(integer, integer); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION my_schema.my_multiply(bigint, bigint, bigint); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION my_schema.my_multiply(bigint, bigint); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION rotate_point(point geo_point, angle real); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION symmetric_point(point geo_point); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION line(start_point array(double precision), end_point array(double precision)); DROP OK, 1 row affected (... sec) cr> DROP FUNCTION utc(bigint, bigint, bigint); DROP OK, 1 row affected (... sec) .. _ECMAScript 2019: https://262.ecma-international.org/10.0/index.html .. _GraalVM JavaScript: https://www.graalvm.org/reference-manual/js/ .. _GraalVM JavaScript Compatibility FAQ: https://www.graalvm.org/latest/reference-manual/js/JavaScriptCompatibility/ .. _GraalVM Polyglot API: https://www.graalvm.org/reference-manual/embed-languages/ .. 
_GraalVM Security Guide: https://www.graalvm.org/security-guide/ .. _stock host Java VM JIT compilers: https://www.graalvm.org/reference-manual/js/RunOnJDK/.. highlight:: psql .. _arithmetic: ==================== Arithmetic operators ==================== Arithmetic :ref:`operators ` perform mathematical operations on numeric values (including timestamps): ======== ========================================================= Operator Description ======== ========================================================= ``+`` Add one number to another ``-`` Subtract the second number from the first ``*`` Multiply the first number with the second ``/`` Divide the first number by the second ``%`` Finds the remainder of division of one number by another ``^`` Finds the exponentiation of one number raised to another ======== ========================================================= .. NOTE:: Operators are evaluated from left to right. Operation with a higher precedence is performed before operation with a lower precedence. Operators have the following precedence (from higher to lower): 1. Parentheses 2. Exponentiation 3. Multiplication and Division 4. Addition and Subtraction Use parentheses if you want to ensure a specific order of evaluation. Here's an example that uses all of the available arithmetic operators:: cr> select ((2 * 4.0 - 2 ^ 3 + 1) / 2) % 3 AS n; +-----+ | n | +-----+ | 0.5 | +-----+ SELECT 1 row in set (... sec) Arithmetic operators always return the data type of the argument with the higher precision. In the case of division, if both arguments are integers, the result will also be an integer with the fractional part truncated:: cr> select 5 / 2 AS a, 5 / 2.0 AS b; +---+-----+ | a | b | +---+-----+ | 2 | 2.5 | +---+-----+ SELECT 1 row in set (... sec) .. NOTE:: The same restrictions that apply to :ref:`scalar functions ` also apply to arithmetic operators... highlight:: psql .. _bit-operators: ============= Bit operators ============= Bit :ref:`operators ` perform bitwise operations on numeric integral values and :ref:`bit ` strings: ======== ======================== Operator Description ======== ======================== ``&`` Bitwise AND of operands. ``|`` Bitwise OR of operands. ``#`` Bitwise XOR of operands. ======== ======================== Here's an example that uses all of the available bit operators:: cr> select 1 & 2 | 3 # 4 AS n; +---+ | n | +---+ | 7 | +---+ SELECT 1 row in set (... sec) And an example with bit strings:: cr> select B'101' # B'011' AS n; +--------+ | n | +--------+ | B'110' | +--------+ SELECT 1 row in set (... sec) When applied to numeric operands, bit operators always return the data type of the argument with the higher precision. If at least one operand is ``NULL``, bit operators return ``NULL``. When applied to ``BIT`` strings, operands must have equal length. .. NOTE:: Bit operators have the same precedence and evaluated from left to right. Use parentheses if you want to ensure a specific order of evaluation... highlight:: psql .. _comparison-operators: ==================== Comparison operators ==================== A comparison :ref:`operator ` tests the relationship between two values and returns a corresponding value of ``true``, ``false``, or ``NULL``. .. rubric:: Table of contents .. contents:: :local: .. 
_comparison-operators-basic: Basic operators =============== For simple :ref:`data types `, the following basic operators can be used: ======== ========================== Operator Description ======== ========================== ``<`` Less than -------- -------------------------- ``>`` Greater than -------- -------------------------- ``<=`` Less than or equal to -------- -------------------------- ``>=`` Greater than or equal to -------- -------------------------- ``=`` Equal -------- -------------------------- ``<>`` Not equal -------- -------------------------- ``!=`` Not equal (same as ``<>``) ======== ========================== When comparing strings, a `lexicographical comparison`_ is performed:: cr> select name from locations where name > 'Argabuthon' order by name; +------------------------------------+ | name | +------------------------------------+ | Arkintoofle Minor | | Bartledan | | Galactic Sector QQ7 Active J Gamma | | North West Ripple | | Outer Eastern Rim | +------------------------------------+ SELECT 5 rows in set (... sec) When comparing dates, `ISO date formats`_ can be used:: cr> select date, position from locations where date <= '1979-10-12' and ... position < 3 order by position; +--------------+----------+ | date | position | +--------------+----------+ | 308534400000 | 1 | | 308534400000 | 2 | +--------------+----------+ SELECT 2 rows in set (... sec) When comparing Geo Shapes, `topological comparison`_ is used. Topological equality means that the geometries have the same dimension, and their point-sets occupy the same space. This means that the order of vertices may be different in topologically equal geometries:: cr> SELECT 'POLYGON (( 0 0, 1 0, 1 1, 0 1, 0 0))'::GEO_SHAPE = 'POLYGON (( 1 0, 1 1, 0 1, 0 0, 1 0))'::GEO_SHAPE as res; +------+ | res | +------+ | TRUE | +------+ SELECT 1 row in set (... sec) Geometry collections, containing only linestrings, points or polygons are normalized to MultiLineString, MultiPoint and MultiPolygon. Hence, geometry collection of points is equal to a MultiPoint with the same points set:: cr> select 'MULTIPOINT ((10 40), (40 30), (20 20))'::GEO_SHAPE = 'GEOMETRYCOLLECTION (POINT (10 40), POINT(40 30), POINT(20 20))'::GEO_SHAPE; +------+ | true | +------+ | TRUE | +------+ SELECT 1 row in set (... sec) .. TIP:: Comparison operators are commonly used to filter rows (e.g., in the :ref:`WHERE ` and :ref:`HAVING ` clauses of a :ref:`SELECT ` statement). However, basic comparison operators can be used as :ref:`value expressions ` in any context. For example:: cr> SELECT 1 < 10 as my_column; +-----------+ | my_column | +-----------+ | TRUE | +-----------+ SELECT 1 row in set (... sec) .. 
_comparison-operators-where: ``WHERE`` clause operators ========================== Within a :ref:`sql_dql_where_clause`, the following operators can also be used: ================================= =================================================== Operator Description ================================= =================================================== ``~`` , ``~*`` , ``!~`` , ``!~*`` See :ref:`sql_dql_regexp` --------------------------------- --------------------------------------------------- :ref:`sql_dql_like` Matches a part of the given value --------------------------------- --------------------------------------------------- :ref:`sql_dql_not` Negates a condition --------------------------------- --------------------------------------------------- :ref:`sql_dql_is_null` Matches a null value --------------------------------- --------------------------------------------------- :ref:`sql_dql_is_not_null` Matches a non-null value --------------------------------- --------------------------------------------------- ``ip << range`` True if IP is within the given IP range (using `CIDR notation`_) --------------------------------- --------------------------------------------------- ``x BETWEEN y AND z`` Shortcut for ``x >= y AND x <= z`` ================================= =================================================== .. SEEALSO:: - :ref:`sql_array_comparisons` - :ref:`sql_subquery_expressions` .. _CIDR notation: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_blocks .. _ISO date formats: http://joda-time.sourceforge.net/api-release/org/joda/time/format/ISODateTimeFormat.html#dateOptionalTimeParser%28%29 .. _lexicographical comparison: https://lucene.apache.org/core/6_6_0/core/org/apache/lucene/search/TermRangeQuery.html .. _topological comparison: https://postgis.net/docs/ST_Equals.html.. highlight:: psql .. _sql_array_comparisons: Array comparisons ================= An array comparison :ref:`operator ` test the relationship between a value and an array and return ``true``, ``false``, or ``NULL``. .. SEEALSO:: :ref:`sql_subquery_expressions` .. rubric:: Table of contents .. contents:: :local: .. _sql_in_array_comparison: ``IN (value [, ...])`` ---------------------- Syntax: .. code-block:: sql expression IN (value [, ...]) The ``IN`` :ref:`operator ` returns ``true`` if the left-hand matches at least one value contained within the right-hand side. The operator returns ``NULL`` if: - The left-hand :ref:`expression ` :ref:`evaluates ` to ``NULL`` - There are no matching right-hand values and at least one right-hand value is ``NULL`` Here's an example:: cr> SELECT ... 1 in (1, 2, 3) AS a, ... 4 in (1, 2, 3) AS b, ... 5 in (1, 2, null) as c; +------+-------+------+ | a | b | c | +------+-------+------+ | TRUE | FALSE | NULL | +------+-------+------+ SELECT 1 row in set (... sec) .. _sql_any_array_comparison: ``ANY/SOME (array expression)`` ------------------------------- Syntax: .. code-block:: sql expression ANY | SOME (array_expression) Here, ```` can be any :ref:`basic comparison operator `. An example:: cr> SELECT ... 1 = ANY ([1,2,3]) AS a, ... 4 = ANY ([1,2,3]) AS b; +------+-------+ | a | b | +------+-------+ | TRUE | FALSE | +------+-------+ SELECT 1 row in set (... sec) The ``ANY`` :ref:`operator ` returns ``true`` if the defined comparison is ``true`` for any of the values in the right-hand array :ref:`expression `. If the right side is a multi-dimension array it is automatically unnested to the required dimension. An example:: cr> SELECT ... 
4 = ANY ([[1, 2], [3, 4]]) as a, ... 5 = ANY ([[1, 2], [3, 4]]) as b, ... [1, 2] = ANY ([[1,2], [3, 4]]) as c, ... [1, 3] = ANY ([[1,2], [3, 4]]) as d; +------+-------+------+-------+ | a | b | c | d | +------+-------+------+-------+ | TRUE | FALSE | TRUE | FALSE | +------+-------+------+-------+ SELECT 1 row in set (... sec) The operator returns ``false`` if the comparison returns ``false`` for all right-hand values or if there are no right-hand values. The operator returns ``NULL`` if: - The left-hand expression :ref:`evaluates ` to ``NULL`` - There are no matching right-hand values and at least one right-hand value is ``NULL`` .. TIP:: When doing ``NOT = ANY()``, query performance may be degraded because special handling is required to implement the `3-valued logic`_. To achieve better performance, consider using the :ref:`ignore3vl function `. .. _all_array_comparison: ``ALL (array_expression)`` -------------------------- Syntax: .. code-block:: sql value comparison ALL (array_expression) Here, ``comparison`` can be any :ref:`basic comparison operator `. Objects and arrays of objects are not supported for either :ref:`operand `. Here's an example:: cr> SELECT 1 <> ALL(ARRAY[2, 3, 4]) AS x; +------+ | x | +------+ | TRUE | +------+ SELECT 1 row in set (... sec) The ``ALL`` :ref:`operator ` returns ``true`` if the defined comparison is ``true`` for all values in the right-hand :ref:`array expression `. The operator returns ``false`` if the comparison returns ``false`` for all right-hand values. The operator returns ``NULL`` if: - The left-hand expression :ref:`evaluates ` to ``NULL`` - No comparison returns ``false`` and at least one right-hand value is ``NULL`` .. _3-valued logic: https://en.wikipedia.org/wiki/Null_(SQL)#Comparisons_with_NULL_and_the_three-valued_logic_(3VL).. highlight:: psql .. _sql_subquery_expressions: Subquery expressions ==================== Some :ref:`operators ` can be used with an :ref:`uncorrelated subquery ` to form a *subquery expression* that returns a boolean value (i.e., ``true`` or ``false``) or ``NULL``. .. SEEALSO:: :ref:`SQL: Value expressions ` .. rubric:: Table of contents .. contents:: :local: .. _sql_in_subquery_expression: ``IN (subquery)`` ----------------- Syntax: .. code-block:: sql expression IN (subquery) The ``subquery`` must produce result rows with a single column only. Here's an example:: cr> select name, surname, sex from employees ... where dept_id in (select id from departments where name = 'Marketing') ... order by name, surname; +--------+----------+-----+ | name | surname | sex | +--------+----------+-----+ | David | Bowe | M | | David | Limb | M | | Sarrah | Mcmillan | F | | Smith | Clark | M | +--------+----------+-----+ SELECT 4 rows in set (... sec) The ``IN`` :ref:`operator ` returns ``true`` if any :ref:`subquery ` row equals the left-hand :ref:`operand `. Otherwise, it returns ``false`` (including the case where the subquery returns no rows). The operator returns ``NULL`` if: - The left-hand expression :ref:`evaluates ` to ``NULL`` - There are no matching right-hand values and at least one right-hand value is ``NULL`` .. NOTE:: ``IN (subquery)`` is an alias for ``= ANY (subquery)`` .. _sql_any_subquery_expression: ``ANY/SOME (subquery)`` ----------------------- Syntax: .. code-block:: sql expression comparison ANY | SOME (subquery) Here, ``comparison`` can be any :ref:`basic comparison operator `. The ``subquery`` must produce result rows with a single column only. 
Here's an example:: cr> select name, population from countries ... where population > any (select * from unnest([8000000, 22000000, NULL])) ... order by population, name; +--------------+------------+ | name | population | +--------------+------------+ | Austria | 8747000 | | South Africa | 55910000 | | France | 66900000 | | Turkey | 79510000 | | Germany | 82670000 | +--------------+------------+ SELECT 5 rows in set (... sec) The ``ANY`` :ref:`operator ` returns ``true`` if the defined comparison is ``true`` for any of the result rows of the right-hand :ref:`subquery `. The operator returns ``false`` if the comparison returns ``false`` for all result rows of the subquery or if the subquery returns no rows. The operator returns ``NULL`` if: - The left-hand expression :ref:`evaluates ` to ``NULL`` - There are no matching right-hand values and at least one right-hand value is ``NULL`` .. NOTE:: The following is not supported: - ``IS NULL`` or ``IS NOT NULL`` as ``comparison`` - Matching as many columns as there are expressions on the left-hand row e.g. ``(x,y) = ANY (select x, y from t)`` ``ALL (subquery)`` ------------------ Syntax: .. code-block:: sql value comparison ALL (subquery) Here, ``comparison`` can be any :ref:`basic comparison operator `. The ``subquery`` must produce result rows with a single column only. Here's an example:: cr> select 100 <> ALL (select height from sys.summits) AS x; +------+ | x | +------+ | TRUE | +------+ SELECT 1 row in set (... sec) The ``ALL`` :ref:`operator ` returns ``true`` if the defined comparison is ``true`` for all of the result rows of the right-hand :ref:`subquery `. The operator returns ``false`` if the comparison returns ``false`` for any result rows of the subquery. The operator returns ``NULL`` if: - The left-hand expression :ref:`evaluates ` to ``NULL`` - No comparison returns ``false`` and at least one right-hand value is ``NULL`` .. highlight:: sh .. _conf-cluster-settings: ===================== Cluster-wide settings ===================== All currently applied cluster settings can be read by querying the :ref:`sys.cluster.settings ` column. Most cluster settings can be :ref:`changed at runtime `. This is documented for each setting. .. rubric:: Table of contents .. contents:: :local: .. _applying-cluster-settings: Non-runtime cluster-wide settings --------------------------------- Cluster-wide settings that cannot be changed at runtime need to be specified in the configuration of each node in the cluster. .. CAUTION:: Cluster settings specified via node configurations are required to be exactly the same on every node in the cluster for proper operation of the cluster. .. _conf_collecting_stats: Collecting stats ---------------- .. _stats.enabled: **stats.enabled** | *Default:* ``true`` | *Runtime:* ``yes`` A boolean indicating whether or not to collect statistical information about the cluster. .. CAUTION:: The collection of statistical information incurs a slight performance penalty, as details about every job and operation across the cluster will cause data to be inserted into the corresponding system tables. .. _stats.jobs_log_size: **stats.jobs_log_size** | *Default:* ``10000`` | *Runtime:* ``yes`` The maximum number of job records to be kept in the :ref:`sys.jobs_log ` table on each node. A job record corresponds to a single SQL statement to be executed on the cluster. These records are used for performance analytics. A larger job log produces more comprehensive stats, but uses more RAM. 
Older job records are deleted as newer records are added, once the limit is reached. Setting this value to ``0`` disables collecting job information. .. _stats.jobs_log_expiration: **stats.jobs_log_expiration** | *Default:* ``0s`` (disabled) | *Runtime:* ``yes`` The job record expiry time in seconds. Job records in the :ref:`sys.jobs_log ` table are periodically cleared if they are older than the expiry time. This setting overrides :ref:`stats.jobs_log_size `. If the value is set to ``0``, time-based log entry eviction is disabled. .. NOTE:: If both the :ref:`stats.jobs_log_size ` and :ref:`stats.jobs_log_expiration ` settings are disabled, jobs will not be recorded. .. _stats.jobs_log_filter: **stats.jobs_log_filter** | *Default:* ``true`` (Include everything) | *Runtime:* ``yes`` An :ref:`expression ` to determine if a job should be recorded into ``sys.jobs_log``. The expression must :ref:`evaluate ` to a boolean. If it evaluates to ``true`` the statement will show up in ``sys.jobs_log`` until it's evicted due to one of the other rules (expiration or size limit reached). The expression may reference all columns contained in ``sys.jobs_log``. A common use case is to include only jobs that took a certain amount of time to execute:: cr> SET GLOBAL "stats.jobs_log_filter" = $$ended - started > '5 minutes'::interval$$; SET OK, 1 row affected (... sec) .. _stats.jobs_log_persistent_filter: **stats.jobs_log_persistent_filter** | *Default:* ``false`` (Include nothing) | *Runtime:* ``yes`` An expression to determine if a job should also be recorded to the regular ``CrateDB`` log. Entries that match this filter will be logged under the ``StatementLog`` logger with the ``INFO`` level. This is similar to ``stats.jobs_log_filter`` except that these entries are persisted to the log file. This should be used with caution and shouldn't be set to an expression that matches many queries, as the logging operation will block on IO and can therefore affect performance. A common use case is to use this for slow query logging. .. _stats.operations_log_size: **stats.operations_log_size** | *Default:* ``10000`` | *Runtime:* ``yes`` The maximum number of operations records to be kept in the :ref:`sys.operations_log ` table on each node. A job consists of one or more individual operations. Operations records are used for performance analytics. A larger operations log produces more comprehensive stats, but uses more RAM. Older operations records are deleted as newer records are added, once the limit is reached. Setting this value to ``0`` disables collecting operations information. .. _stats.operations_log_expiration: **stats.operations_log_expiration** | *Default:* ``0s`` (disabled) | *Runtime:* ``yes`` Entries of :ref:`sys.operations_log ` are cleared by a periodic job when they are older than the specified expiry time. This setting overrides :ref:`stats.operations_log_size `. If the value is set to ``0``, time-based log entry eviction is disabled. .. NOTE:: If both settings :ref:`stats.operations_log_size ` and :ref:`stats.operations_log_expiration ` are disabled, no operations information will be collected. .. _stats.service.interval: **stats.service.interval** | *Default:* ``24h`` | *Runtime:* ``yes`` Defines the interval at which table statistics, used to produce optimal query execution plans, are refreshed. This field expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). 
If the value provided is ``0`` then the refresh is disabled. .. CAUTION:: Using a very small value can cause a high load on the cluster. .. _stats.service.max_bytes_per_sec: **stats.service.max_bytes_per_sec** | *Default:* ``40mb`` | *Runtime:* ``yes`` Specifies the maximum number of bytes per second that can be read on data nodes to collect statistics. If this is set to a positive number, the underlying I/O operations of the :ref:`ANALYZE ` statement are throttled. If the value provided is ``0`` then the throttling is disabled. Shard limits ------------ .. _cluster.max_shards_per_node: **cluster.max_shards_per_node** | *Default:* 1000 | *Runtime:* ``yes`` The maximum number of open primary and replica shards per node. This setting is checked on a shard creation and doesn't limit shards for individual nodes. To limit the number of shards for each node, use :ref:`cluster.routing.allocation.total_shards_per_node ` setting. The actual limit being checked is ``max_shards_per_node * number of data nodes``. Any operations that would result in the creation of additional shard copies that would exceed this limit are rejected. For example. If you have 999 shards in the current cluster and you try to create a new table, the create table operation will fail. Similarly, if a write operation would lead to the creation of a new partition, the statement will fail. Each shard on a node requires some memory and increases the size of the cluster state. Having too many shards per node will impact the clusters stability and it is therefore discouraged to raise the limit above 1000. .. NOTE:: The maximum number of shards per node setting is also used for the :ref:`sys-node_checks_max_shards_per_node` check. .. NOTE:: If a table is created with :ref:`sql-create-table-number-of-replicas` provided as a range or default ``0-1`` value, the limit check accounts only for primary shards and not for possible expanded replicas and thus actual number of all shards can exceed the limit. .. _conf_usage_data_collector: Usage data collector -------------------- The settings of the Usage Data Collector are read-only and cannot be set during runtime. Please refer to :ref:`usage_data_collector` to get further information about its usage. .. _udc.enabled: **udc.enabled** | *Default:* ``true`` | *Runtime:* ``no`` ``true``: Enables the Usage Data Collector. ``false``: Disables the Usage Data Collector. .. _udc.initial_delay: **udc.initial_delay** | *Default:* ``10m`` | *Runtime:* ``no`` The delay for first ping after start-up. This field expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). .. _udc.interval: **udc.interval** | *Default:* ``24h`` | *Runtime:* ``no`` The interval a UDC ping is sent. This field expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). .. _udc.url: **udc.url** | *Default:* ``https://udc.crate.io`` | *Runtime:* ``no`` The URL the ping is sent to. .. _conf_graceful_stop: Graceful stop ------------- By default, when the CrateDB process stops it simply shuts down, possibly making some shards unavailable which leads to a *red* cluster state and lets some queries fail that required the now unavailable shards. In order to *safely* shutdown a CrateDB node, the graceful stop procedure can be used. 
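For example, before taking a node out of the cluster for maintenance you can tighten the availability guarantee at runtime via the ``cluster.graceful_stop.min_availability`` setting described below. This is a minimal sketch; the value ``'full'`` is only an illustration and should match your replication setup:: cr> SET GLOBAL TRANSIENT "cluster.graceful_stop.min_availability" = 'full'; SET OK, 1 row affected (... sec)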
The following cluster settings can be used to change the shutdown behaviour of nodes of the cluster: .. _cluster.graceful_stop.min_availability: **cluster.graceful_stop.min_availability** | *Default:* ``primaries`` | *Runtime:* ``yes`` | *Allowed values:* ``none | primaries | full`` ``none``: No minimum data availability is required. The node may shut down even if records are missing after shutdown. ``primaries``: At least all primary shards need to be available after the node has shut down. Replicas may be missing. ``full``: All records and all replicas need to be available after the node has shut down. Data availability is full. .. NOTE:: This option is ignored if there is only 1 node in a cluster! .. _cluster.graceful_stop.timeout: **cluster.graceful_stop.timeout** | *Default:* ``2h`` | *Runtime:* ``yes`` Defines the maximum waiting time in milliseconds for the :ref:`reallocation ` process to finish. The ``force`` setting will define the behaviour when the shutdown process runs into this timeout. The timeout expects a time value either as a ``bigint`` or ``double precision`` or alternatively as a string literal with a time suffix (``ms``, ``s``, ``m``, ``h``, ``d``, ``w``). .. _cluster.graceful_stop.force: **cluster.graceful_stop.force** | *Default:* ``false`` | *Runtime:* ``yes`` Defines whether ``graceful stop`` should force stopping of the node if it runs into the timeout which is specified with the `cluster.graceful_stop.timeout`_ setting. .. _conf_bulk_operations: Bulk operations --------------- SQL DML Statements involving a huge amount of rows like :ref:`sql-copy-from`, :ref:`sql-insert` or :ref:`ref-update` can take an enormous amount of time and resources. The following settings change the behaviour of those queries. .. _bulk.request_timeout: **bulk.request_timeout** | *Default:* ``1m`` | *Runtime:* ``yes`` Defines the timeout of internal shard-based requests involved in the execution of SQL DML Statements over a huge amount of rows. .. _conf_discovery: Discovery --------- Data sharding and work splitting are at the core of CrateDB. This is how we manage to execute very fast queries over incredibly large datasets. In order for multiple CrateDB nodes to work together a cluster needs to be formed. The process of finding other nodes with which to form a cluster is called discovery. Discovery runs when a CrateDB node starts and when a node is not able to reach the master node and continues until a master node is found or a new master node is elected. .. _discovery.seed_hosts: **discovery.seed_hosts** | *Default:* ``127.0.0.1`` | *Runtime:* ``no`` In order to form a cluster with CrateDB instances running on other nodes a list of seed master-eligible nodes needs to be provided. This setting should normally contain the addresses of all the master-eligible nodes in the cluster. In order to seed the discovery process the nodes listed here must be live and contactable. This setting contains either an array of hosts or a comma-delimited string. By default a node will bind to the available loopback and scan for local ports between ``4300`` and ``4400`` to try to connect to other nodes running on the same server. This default behaviour provides local auto clustering without any configuration. Each value should be in the form of host:port or host (where port defaults to the setting ``transport.tcp.port``). .. NOTE:: IPv6 hosts must be bracketed. .. 
_cluster.initial_master_nodes: **cluster.initial_master_nodes** | *Default:* ``not set`` | *Runtime:* ``no`` Contains a list of node names, fully-qualified hostnames or IP addresses of the master-eligible nodes which will vote in the very first election of a cluster that's bootstrapping for the first time. By default this is not set, meaning it expects this node to join an already formed cluster. In development mode, with no discovery settings configured, this step is performed by the nodes themselves, but this auto-bootstrapping is designed to aid development and is not safe for production. In production you must explicitly list the names or IP addresses of the master-eligible nodes whose votes should be counted in the very first election. .. _discovery.type: **discovery.type** | *Default:* ``zen`` | *Runtime:* ``no`` | *Allowed values:* ``zen | single-node`` Specifies whether CrateDB should form a multiple-node cluster. By default, CrateDB discovers other nodes when forming a cluster and allows other nodes to join the cluster later. If ``discovery.type`` is set to ``single-node``, CrateDB forms a single-node cluster and the node won't join any other clusters. This can be useful for testing. It is not recommended to use this for production setups. The ``single-node`` mode also skips `bootstrap checks`_. .. CAUTION:: If a node is started without any :ref:`initial_master_nodes ` or a :ref:`discovery_type ` set to ``single-node`` (e.g., the default configuration), it will never join a cluster even if the configuration is subsequently changed. It is possible to force the node to forget its current cluster state by using the :ref:`cli-crate-node` CLI tool. However, be aware that this may result in data loss. .. _conf_host_discovery: Unicast host discovery ...................... As described above, CrateDB has built-in support for statically specifying a list of addresses that will act as the seed nodes in the discovery process using the `discovery.seed_hosts`_ setting. CrateDB also supports several other mechanisms of seed node discovery. Currently, there are two other discovery types: via DNS and via the EC2 API. When a node starts up with one of these discovery types enabled, it performs a lookup using the settings for the specified mechanism listed below. The hosts and ports retrieved from the mechanism will be used to generate a list of unicast hosts for node discovery. The same lookup is also performed by all nodes in a cluster whenever the master is re-elected (see `Cluster Meta Data`). .. _discovery.seed_providers: **discovery.seed_providers** | *Default:* ``not set`` | *Runtime:* ``no`` | *Allowed values:* ``srv``, ``ec2`` See also: `Discovery`_. .. _conf_dns_discovery: Discovery via DNS ````````````````` CrateDB has built-in support for discovery via DNS. To enable DNS discovery, the ``discovery.seed_providers`` setting needs to be set to ``srv``. The order of the unicast hosts is defined by the priority, weight and name of each host defined in the SRV record. For example:: _crate._srv.example.com. 3600 IN SRV 2 20 4300 crate1.example.com. _crate._srv.example.com. 3600 IN SRV 1 10 4300 crate2.example.com. _crate._srv.example.com. 3600 IN SRV 2 10 4300 crate3.example.com. would result in a list of discovery nodes ordered like:: crate2.example.com:4300, crate3.example.com:4300, crate1.example.com:4300 .. 
_discovery.srv.query: **discovery.srv.query** | *Runtime:* ``no`` The DNS query that is used to look up SRV records, usually in the format ``_service._protocol.fqdn`` If not set, the service discovery will not be able to look up any SRV records. .. _discovery.srv.resolver: **discovery.srv.resolver** | *Runtime:* ``no`` The hostname or IP of the DNS server used to resolve DNS records. If this is not set, or the specified hostname/IP is not resolvable, the default (system) resolver is used. Optionally a custom port can be specified using the format ``hostname:port``. .. _conf_ec2_discovery: Discovery on Amazon EC2 ``````````````````````` CrateDB has built-in support for discovery via the EC2 API. To enable EC2 discovery the ``discovery.seed_providers`` settings needs to be set to ``ec2``. .. _discovery.ec2.access_key: **discovery.ec2.access_key** | *Runtime:* ``no`` The access key ID to identify the API calls. .. _discovery.ec2.secret_key: **discovery.ec2.secret_key** | *Runtime:* ``no`` The secret key to identify the API calls. Following settings control the discovery: .. _discovery.ec2.groups: **discovery.ec2.groups** | *Runtime:* ``no`` A list of security groups; either by ID or name. Only instances with the given group will be used for unicast host discovery. .. _discovery.ec2.any_group: **discovery.ec2.any_group** | *Default:* ``true`` | *Runtime:* ``no`` Defines whether all (``false``) or just any (``true``) security group must be present for the instance to be used for discovery. .. _discovery.ec2.host_type: **discovery.ec2.host_type** | *Default:* ``private_ip`` | *Runtime:* ``no`` | *Allowed values:* ``private_ip``, ``public_ip``, ``private_dns``, ``public_dns`` Defines via which host type to communicate with other instances. .. _discovery.ec2.availability_zones: **discovery.ec2.availability_zones** | *Runtime:* ``no`` A list of availability zones. Only instances within the given availability zone will be used for unicast host discovery. .. _discovery.ec2.tag.name: **discovery.ec2.tag.** | *Runtime:* ``no`` EC2 instances for discovery can also be filtered by tags using the ``discovery.ec2.tag.`` prefix plus the tag name. E.g. to filter instances that have the ``environment`` tags with the value ``dev`` your setting will look like: ``discovery.ec2.tag.environment: dev``. .. _discovery.ec2.endpoint: **discovery.ec2.endpoint** | *Runtime:* ``no`` If you have your own compatible implementation of the EC2 API service you can set the endpoint that should be used. .. _conf_routing: Routing allocation ------------------ .. _cluster.routing.allocation.enable: **cluster.routing.allocation.enable** | *Default:* ``all`` | *Runtime:* ``yes`` | *Allowed values:* ``all | none | primaries | new_primaries`` ``all`` allows all :ref:`shard allocations `, the cluster can allocate all kinds of shards. ``none`` allows no shard allocations at all. No shard will be moved or created. ``primaries`` only primaries can be moved or created. This includes existing primary shards. ``new_primaries`` allows allocations for new primary shards only. This means that for example a newly added node will not allocate any replicas. However it is still possible to allocate new primary shards for new indices. Whenever you want to perform a zero downtime upgrade of your cluster you need to set this value before gracefully stopping the first node and reset it to ``all`` after starting the last updated node. .. NOTE:: This allocation setting has no effect on the :ref:`recovery ` of primary shards! 
Even when ``cluster.routing.allocation.enable`` is set to ``none``, nodes will recover their unassigned local primary shards immediately after restart. .. _cluster.routing.rebalance.enable: **cluster.routing.rebalance.enable** | *Default:* ``all`` | *Runtime:* ``yes`` | *Allowed values:* ``all | none | primaries | replicas`` Enables or disables rebalancing for different types of shards: - ``all`` allows shard rebalancing for all types of shards. - ``none`` disables shard rebalancing for any types. - ``primaries`` allows shard rebalancing only for primary shards. - ``replicas`` allows shard rebalancing only for replica shards. .. _cluster.routing.allocation.allow_rebalance: **cluster.routing.allocation.allow_rebalance** | *Default:* ``indices_all_active`` | *Runtime:* ``yes`` | *Allowed values:* ``always | indices_primary_active | indices_all_active`` Defines when rebalancing will happen based on the total state of all the indices shards in the cluster. Defaults to ``indices_all_active`` to reduce chatter during initial :ref:`recovery `. .. _cluster.routing.allocation.cluster_concurrent_rebalance: **cluster.routing.allocation.cluster_concurrent_rebalance** | *Default:* ``2`` | *Runtime:* ``yes`` Defines how many concurrent rebalancing tasks are allowed across all nodes. .. _cluster.routing.allocation.node_initial_primaries_recoveries: **cluster.routing.allocation.node_initial_primaries_recoveries** | *Default:* ``4`` | *Runtime:* ``yes`` Defines how many concurrent primary shard recoveries are allowed on a node. Since primary recoveries use data that is already on disk (as opposed to inter-node recoveries), recovery should be fast and so this setting can be higher than :ref:`node_concurrent_recoveries `. .. _cluster.routing.allocation.node_concurrent_recoveries: **cluster.routing.allocation.node_concurrent_recoveries** | *Default:* ``2`` | *Runtime:* ``yes`` Defines how many concurrent recoveries are allowed on a node. .. _conf-routing-allocation-balance: Shard balancing ............... You can configure how CrateDB attempts to balance shards across a cluster by specifying one or more property *weights*. CrateDB will consider a cluster to be balanced when no further allowed action can bring the weighted properties of each node closer together. .. NOTE:: Balancing may be restricted by other settings (e.g., :ref:`attribute-based ` and :ref:`disk-based ` shard allocation). .. _cluster.routing.allocation.balance.shard: **cluster.routing.allocation.balance.shard** | *Default:* ``0.45f`` | *Runtime:* ``yes`` Defines the weight factor for shards :ref:`allocated ` on a node (float). Raising this raises the tendency to equalize the number of shards across all nodes in the cluster. .. NOTE:: :ref:`cluster.routing.allocation.balance.shard` and :ref:`cluster.routing.allocation.balance.index` cannot be both set to ``0.0f``. .. _cluster.routing.allocation.balance.index: **cluster.routing.allocation.balance.index** | *Default:* ``0.55f`` | *Runtime:* ``yes`` Defines a factor to the number of shards per index :ref:`allocated ` on a specific node (float). Increasing this value raises the tendency to equalize the number of shards per index across all nodes in the cluster. .. NOTE:: :ref:`cluster.routing.allocation.balance.shard` and :ref:`cluster.routing.allocation.balance.index` cannot be both set to ``0.0f``. .. 
_cluster.routing.allocation.balance.threshold: **cluster.routing.allocation.balance.threshold** | *Default:* ``1.0f`` | *Runtime:* ``yes`` Minimal optimization value of operations that should be performed (non negative float). Increasing this value will cause the cluster to be less aggressive about optimising the shard balance. .. _conf-routing-allocation-attributes: Attribute-based shard allocation ................................ You can control how shards are allocated to specific nodes by setting :ref:`custom attributes ` on each node (e.g., server rack ID or node availability zone). After doing this, you can define :ref:`cluster-wide attribute awareness ` and then configure :ref:`cluster-wide attribute filtering `. .. SEEALSO:: For an in-depth example of using custom node attributes, check out the `multi-zone setup how-to guide`_. .. _conf-routing-allocation-awareness: Cluster-wide attribute awareness ````````````````````````````````` To make use of :ref:`custom attributes ` for :ref:`attribute-based ` :ref:`shard allocation `, you must configure *cluster-wide attribute awareness*. .. _cluster.routing.allocation.awareness.attributes: **cluster.routing.allocation.awareness.attributes** | *Runtime:* ``no`` You may define :ref:`custom node attributes ` which can then be used to do awareness based on the :ref:`allocation ` of a shard and its replicas. For example, let's say we want to use an attribute named ``rack_id``. We start two nodes with ``node.attr.rack_id`` set to ``rack_one``. Then we create a single table with five shards and one replica. The table will be fully deployed on the current nodes (five shards and one replica each, making a total of 10 shards). Now, if we start two more nodes with ``node.attr.rack_id`` set to ``rack_two``, CrateDB will relocate shards to even out the number of shards across the nodes. However, a shard and its replica will not be allocated to nodes sharing the same ``rack_id`` value. The ``awareness.attributes`` setting supports using several values. .. _cluster.routing.allocation.awareness.force.\*.values: **cluster.routing.allocation.awareness.force.\*.values** | *Runtime:* ``no`` Attributes on which :ref:`shard allocation ` will be forced. Here, ``*`` is a placeholder for the awareness attribute, which can be configured using the :ref:`cluster.routing.allocation.awareness.attributes ` setting. For example, let's say we configured forced shard allocation for an awareness attribute named ``zone`` with ``values`` set to ``zone1, zone2``. Start two nodes with ``node.attr.zone`` set to ``zone1``. Then, create a table with five shards and one replica. The table will be created, but only five shards will be allocated (with no replicas). The replicas will only be allocated when we start one or more nodes with ``node.attr.zone`` set to ``zone2``. .. _conf-routing-allocation-filtering: Cluster-wide attribute filtering ```````````````````````````````` To control how CrateDB uses :ref:`custom attributes ` for :ref:`attribute-based ` :ref:`shard allocation `, you must configure *cluster-wide attribute filtering*. .. NOTE:: CrateDB will retroactively enforce filter definitions. If a new filter would prevent newly created matching shards from being allocated to a node, CrateDB would also move any *existing* matching shards away from that node. .. _cluster.routing.allocation.include.*: **cluster.routing.allocation.include.*** | *Runtime:* ``yes`` Only :ref:`allocate shards ` on nodes where at least **one** of the specified values matches the attribute. 
For example:: cluster.routing.allocation.include.zone: "zone1,zone2"` This setting can be overridden for individual tables by the related :ref:`table setting `. .. _cluster.routing.allocation.exclude.*: **cluster.routing.allocation.exclude.*** | *Runtime:* ``yes`` Only :ref:`allocate shards ` on nodes where **none** of the specified values matches the attribute. For example:: cluster.routing.allocation.exclude.zone: "zone1" This setting can be overridden for individual tables by the related :ref:`table setting `. Therefore, if a node is excluded from shard allocation by this cluster level setting, the node can still allocate shards if the table setting allows it. .. _cluster.routing.allocation.require.*: **cluster.routing.allocation.require.*** | *Runtime:* ``yes`` Used to specify a number of rules, which **all** of them must match for a node in order to :ref:`allocate a shard ` on it. This setting can be overridden for individual tables by the related :ref:`table setting `. .. _conf-routing-allocation-disk: Disk-based shard allocation ........................... .. _cluster.routing.allocation.disk.threshold_enabled: **cluster.routing.allocation.disk.threshold_enabled** | *Default:* ``true`` | *Runtime:* ``yes`` Prevent :ref:`shard allocation ` on nodes depending of the disk usage. .. _cluster.routing.allocation.disk.watermark.low: **cluster.routing.allocation.disk.watermark.low** | *Default:* ``85%`` | *Runtime:* ``yes`` Defines the lower disk threshold limit for :ref:`shard allocations `. New shards will not be allocated on nodes with disk usage greater than this value. It can also be set to an absolute bytes value (like e.g. ``500mb``) to prevent the cluster from allocating new shards on node with less free disk space than this value. .. _cluster.routing.allocation.disk.watermark.high: **cluster.routing.allocation.disk.watermark.high** | *Default:* ``90%`` | *Runtime:* ``yes`` Defines the higher disk threshold limit for :ref:`shard allocations `. The cluster will attempt to relocate existing shards to another node if the disk usage on a node rises above this value. It can also be set to an absolute bytes value (like e.g. ``500mb``) to relocate shards from nodes with less free disk space than this value. .. _cluster.routing.allocation.disk.watermark.flood_stage: **cluster.routing.allocation.disk.watermark.flood_stage** | *Default:* ``95%`` | *Runtime:* ``yes`` Defines the threshold on which CrateDB enforces a read-only block on every index that has at least one :ref:`shard allocated ` on a node with at least one disk exceeding the flood stage. .. NOTE:: :ref:`sql-create-table-blocks-read-only-allow-delete` setting is automatically reset to ``FALSE`` for the tables if the disk space is freed and the threshold is undershot. ``cluster.routing.allocation.disk.watermark`` settings may be defined as percentages or bytes values. However, it is not possible to mix the value types. By default, the cluster will retrieve information about the disk usage of the nodes every 30 seconds. This can also be changed by setting the `cluster.info.update.interval`_ setting. .. NOTE:: The watermark settings are also used for the :ref:`sys-node_checks_watermark_low` and :ref:`sys-node_checks_watermark_high` node check. Setting ``cluster.routing.allocation.disk.threshold_enabled`` to false will disable the allocation decider, but the node checks will still be active and warn users about running low on disk space. .. 
_cluster.routing.allocation.total_shards_per_node: **cluster.routing.allocation.total_shards_per_node** | *Default*: ``-1`` | *Runtime*: ``yes`` Limits the number of primary and replica shards that can be :ref:`allocated ` per node. A value of ``-1`` means unlimited. Setting this to ``1000``, for example, will prevent CrateDB from assigning more than 1000 shards per node. A node with 1000 shards would be excluded from allocation decisions and CrateDB would attempt to allocate shards to other nodes, or leave shards unassigned if no suitable node can be found. .. NOTE:: If a table is created with :ref:`sql-create-table-number-of-replicas` provided as a range or default ``0-1`` value, the limit check accounts only for primary shards and not for possible expanded replicas and thus actual number of all shards can exceed the limit. .. _indices.recovery: Recovery -------- .. _indices.recovery.max_bytes_per_sec: **indices.recovery.max_bytes_per_sec** | *Default:* ``40mb`` | *Runtime:* ``yes`` Specifies the maximum number of bytes that can be transferred during :ref:`shard recovery ` per seconds. Limiting can be disabled by setting it to ``0``. This setting allows to control the network usage of the recovery process. Higher values may result in higher network utilization, but also faster recovery process. .. _indices.recovery.retry_delay_state_sync: **indices.recovery.retry_delay_state_sync** | *Default:* ``500ms`` | *Runtime:* ``yes`` Defines the time to wait after an issue caused by cluster state syncing before retrying to :ref:`recover `. .. _indices.recovery.retry_delay_network: **indices.recovery.retry_delay_network** | *Default:* ``5s`` | *Runtime:* ``yes`` Defines the time to wait after an issue caused by the network before retrying to :ref:`recover `. .. _indices.recovery.internal_action_timeout: **indices.recovery.internal_action_timeout** | *Default:* ``15m`` | *Runtime:* ``yes`` Defines the timeout for internal requests made as part of the :ref:`recovery `. .. _indices.recovery.internal_action_long_timeout: **indices.recovery.internal_action_long_timeout** | *Default:* ``30m`` | *Runtime:* ``yes`` Defines the timeout for internal requests made as part of the :ref:`recovery ` that are expected to take a long time. Defaults to twice :ref:`internal_action_timeout `. .. _indices.recovery.recovery_activity_timeout: **indices.recovery.recovery_activity_timeout** | *Default:* ``30m`` | *Runtime:* ``yes`` :ref:`Recoveries ` that don't show any activity for more then this interval will fail. Defaults to :ref:`internal_action_long_timeout `. .. _indices.recovery.max_concurrent_file_chunks: **indices.recovery.max_concurrent_file_chunks** | *Default:* ``2`` | *Runtime:* ``yes`` Controls the number of file chunk requests that can be sent in parallel per :ref:`recovery `. As multiple recoveries are already running in parallel, controlled by :ref:`cluster.routing.allocation.node_concurrent_recoveries `, increasing this expert-level setting might only help in situations where peer recovery of a single shard is not reaching the total inbound and outbound peer recovery traffic as configured by :ref:`indices.recovery.max_bytes_per_sec `, but is CPU-bound instead, typically when using transport-level security or compression. Memory management ----------------- .. _memory.allocation.type: **memory.allocation.type** | *Default:* ``on-heap`` | *Runtime:* ``yes`` Supported values are ``on-heap`` and ``off-heap``. 
This influences if memory is preferably allocated in the heap space or in the off-heap/direct memory region. Setting this to ``off-heap`` doesn't imply that the heap won't be used anymore. Most allocations will still happen in the heap space but some operations will be allowed to utilize off heap buffers. .. warning:: Using ``off-heap`` is considered **experimental**. .. _memory.operation_limit: **memory.operation_limit** | *Default:* ``0`` | *Runtime:* ``yes`` Default value for the :ref:`memory.operation_limit session setting `. Changing the cluster setting will only affect new sessions, not existing sessions. Example statement to update the default value to 1 GB, i.e. 1073741824 bytes:: cr> SET GLOBAL "memory.operation_limit" = 1073741824; SET OK, 1 row affected (... sec) Operations that hit this memory limit will trigger a ``CircuitBreakingException`` that can be handled in the application to inform the user about too much memory consumption for the particular query. Query circuit breaker --------------------- The Query circuit breaker will keep track of the used memory during the execution of a query. If a query consumes too much memory or if the cluster is already near its memory limit it will terminate the query to ensure the cluster keeps working. .. _indices.breaker.query.limit: **indices.breaker.query.limit** | *Default:* ``60%`` | *Runtime:* ``yes`` Specifies the limit for the query breaker. Provided values can either be absolute values (interpreted as a number of bytes), byte sizes (like ``1mb``) or percentage of the heap size (like ``12%``). A value of ``-1`` disables breaking the circuit while still accounting memory usage. Request circuit breaker ----------------------- The request circuit breaker allows an estimation of required heap memory per request. If a single request exceeds the specified amount of memory, an exception is raised. .. _indices.breaker.request.limit: **indices.breaker.request.limit** | *Default:* ``60%`` | *Runtime:* ``yes`` Specifies the JVM heap limit for the request circuit breaker. Accounting circuit breaker -------------------------- Tracks things that are held in memory independent of queries. For example the memory used by Lucene for segments. .. _indices.breaker.accounting.limit: **indices.breaker.accounting.limit** | *Default:* ``100%`` | *Runtime:* ``yes`` Specifies the JVM heap limit for the accounting circuit breaker .. CAUTION:: This setting is deprecated and will be removed in a future release. .. _stats.breaker.log: Stats circuit breakers ---------------------- Settings that control the behaviour of the stats circuit breaker. There are two breakers in place, one for the jobs log and one for the operations log. For each of them, the breaker limit can be set. .. _stats.breaker.log.jobs.limit: **stats.breaker.log.jobs.limit** | *Default:* ``5%`` | *Runtime:* ``yes`` The maximum memory that can be used from :ref:`CRATE_HEAP_SIZE ` for the :ref:`sys.jobs_log ` table on each node. When this memory limit is reached, the job log circuit breaker logs an error message and clears the :ref:`sys.jobs_log ` table completely. .. _stats.breaker.log.operations.limit: **stats.breaker.log.operations.limit** | *Default:* ``5%`` | *Runtime:* ``yes`` The maximum memory that can be used from :ref:`CRATE_HEAP_SIZE ` for the :ref:`sys.operations_log ` table on each node. When this memory limit is reached, the operations log circuit breaker logs an error message and clears the :ref:`sys.operations_log ` table completely. 
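Both log limits are runtime settings and can be adjusted with ``SET GLOBAL``. A minimal sketch, assuming you want to grant the jobs log a larger share of the heap (the ``10%`` value is only an illustration):: cr> SET GLOBAL TRANSIENT "stats.breaker.log.jobs.limit" = '10%'; SET OK, 1 row affected (... sec)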
Total circuit breaker --------------------- .. _indices.breaker.total.limit: **indices.breaker.total.limit** | *Default:* ``95%`` | *Runtime:* ``yes`` The maximum memory that can be used by all aforementioned circuit breakers together. Even if an individual circuit breaker doesn't hit its individual limit, queries might still get aborted if several circuit breakers together would hit the memory limit configured in ``indices.breaker.total.limit``. Thread pools ------------ Every node uses a number of thread pools to schedule operations, each pool is dedicated to specific operations. The most important pools are: * ``write``: Used for write operations like index, update or delete. The ``type`` defaults to ``fixed``. * ``search``: Used for read operations like ``SELECT`` statements. The ``type`` defaults to ``fixed``. * ``get``: Used for some specific read operations. For example on tables like ``sys.shards`` or ``sys.nodes``. The ``type`` defaults to ``fixed``. * ``refresh``: Used for :ref:`refresh operations `. The ``type`` defaults to ``scaling``. * ``generic``: For internal tasks like cluster state management. The ``type`` defaults to ``scaling``. * ``logical_replication``: For logical replication operations. The ``type`` defaults to fixed. In addition to those pools, there are also ``netty`` worker threads which are used to process network requests and many CPU bound actions like query analysis and optimization. The thread pool settings are expert settings which you generally shouldn't need to touch. They are dynamically sized depending on the number of available CPU cores. If you're running multiple services on the same machine you instead should change the :ref:`processors` setting. Increasing the number of threads for a pool can result in degraded performance due to increased context switching and higher memory footprint. If you observe idle CPU cores increasing the thread pool size is rarely the right course of action, instead it can be a sign that: - Operations are blocked on disk IO. Increasing the thread pool size could result in more operations getting queued and blocked on disk IO without increasing throughput but decreasing it due to more memory pressure and additional garbage collection activity. - Individual operations running single threaded. Not all tasks required to process a SQL statement can be further subdivided and processed in parallel, but many operations default to use one thread per shard. Because of this, you can consider increasing the number of shards of a table to increase the parallelism of a single individual statement and increase CPU core utilization. As an alternative you can try increasing the concurrency on the client side, to have CrateDB process more SQL statements in parallel. .. _thread_pool..type: **thread_pool..type** | *Runtime:* ``no`` | *Allowed values:* ``fixed | scaling`` ``fixed`` holds a fixed size of threads to handle the requests. It also has a queue for pending requests if no threads are available. ``scaling`` ensures that a thread pool holds a dynamic number of threads that are proportional to the workload. Settings for fixed thread pools ............................... If the type of a thread pool is set to ``fixed`` there are a few optional settings. .. _thread_pool..size: **thread_pool..size** | *Runtime:* ``no`` Number of threads. The default size of the different thread pools depend on the number of available CPU cores. .. 
_thread_pool..queue_size: **thread_pool..queue_size** | *Default write:* ``200`` | *Default search:* ``1000`` | *Default get:* ``100`` | *Runtime:* ``no`` Size of the queue for pending requests. A value of ``-1`` sets it to unbounded. If you have burst workloads followed by periods of inactivity, it can make sense to increase the ``queue_size`` to allow a node to buffer more queries before rejecting new operations. Be aware, however, that increasing the queue size for sustained workloads will only increase the system's memory consumption and likely degrade performance. .. _overload_protection: Overload Protection ------------------- Overload protection settings control how many resources operations like ``INSERT INTO FROM QUERY`` or ``COPY`` can use. The values here serve as a starting point for an algorithm that dynamically adapts the effective concurrency limit based on the round-trip time of requests. Whenever one of these settings is updated, the previously calculated effective concurrency is reset. Changing these settings only affects new operations; already running operations will continue with the previous settings. .. _overload_protection.dml.initial_concurrency: **overload_protection.dml.initial_concurrency** | *Default:* ``5`` | *Runtime:* ``yes`` The initial number of concurrent operations allowed per target node. .. _overload_protection.dml.min_concurrency: **overload_protection.dml.min_concurrency** | *Default:* ``1`` | *Runtime:* ``yes`` The minimum number of concurrent operations allowed per target node. .. _overload_protection.dml.max_concurrency: **overload_protection.dml.max_concurrency** | *Default:* ``100`` | *Runtime:* ``yes`` The maximum number of concurrent operations allowed per target node. .. _overload_protection.dml.queue_size: **overload_protection.dml.queue_size** | *Default:* ``25`` | *Runtime:* ``yes`` How many operations are allowed to queue up. Metadata -------- .. _cluster.info.update.interval: **cluster.info.update.interval** | *Default:* ``30s`` | *Runtime:* ``yes`` Defines how often the cluster collects metadata information (e.g., disk usage) if no concrete event is triggered. .. _metadata_gateway: Metadata gateway ................ The following settings can be used to configure the behavior of the :ref:`metadata gateway `. .. _gateway.expected_nodes: **gateway.expected_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The setting ``gateway.expected_nodes`` defines the total number of nodes expected in the cluster. It is evaluated together with ``gateway.recover_after_nodes`` to decide if the cluster can start with recovery. .. CAUTION:: This setting is deprecated and will be removed in a future version. Use `gateway.expected_data_nodes`_ instead. .. _gateway.expected_data_nodes: **gateway.expected_data_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The setting ``gateway.expected_data_nodes`` defines the total number of data nodes expected in the cluster. It is evaluated together with ``gateway.recover_after_data_nodes`` to decide if the cluster can start with recovery. .. _gateway.recover_after_time: **gateway.recover_after_time** | *Default:* ``5m`` | *Runtime:* ``no`` The ``gateway.recover_after_time`` setting defines the time to wait for the number of nodes set in ``gateway.expected_data_nodes`` (or ``gateway.expected_nodes``) to become available, before starting the recovery, once the number of nodes defined in ``gateway.recover_after_data_nodes`` (or ``gateway.recover_after_nodes``) has already been reached. 
This setting is ignored if ``gateway.expected_data_nodes`` or ``gateway.expected_nodes`` are set to 0 or 1. It also has no effect if ``gateway.recover_after_data_nodes`` is set equal to ``gateway.expected_data_nodes`` (or ``gateway.recover_after_nodes`` is set equal to ``gateway.expected_nodes``). The cluster also proceeds to immediate recovery, and the default waiting time of 5 minutes does not apply, if neither this setting nor ``expected_nodes`` and ``expected_data_nodes`` are explicitly set. .. _gateway.recover_after_nodes: **gateway.recover_after_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The ``gateway.recover_after_nodes`` setting defines the number of nodes that need to join the cluster before the cluster state recovery can start. If this setting is ``-1`` and ``gateway.expected_nodes`` is set, all nodes will need to be started before the cluster state recovery can start. Please note that proceeding with recovery when not all nodes are available could trigger the promotion of shards and the creation of new replicas, generating disk and network load, which may be unnecessary. You can use a combination of this setting with ``gateway.recover_after_time`` to mitigate this risk. .. CAUTION:: This setting is deprecated and will be removed in CrateDB 5.0. Use `gateway.recover_after_data_nodes`_ instead. .. _gateway.recover_after_data_nodes: **gateway.recover_after_data_nodes** | *Default:* ``-1`` | *Runtime:* ``no`` The ``gateway.recover_after_data_nodes`` setting defines the number of data nodes that need to be started before the cluster state recovery can start. If this setting is ``-1`` and ``gateway.expected_data_nodes`` is set, all data nodes will need to be started before the cluster state recovery can start. Please note that proceeding with recovery when not all data nodes are available could trigger the promotion of shards and the creation of new replicas, generating disk and network load, which may be unnecessary. You can use a combination of this setting with ``gateway.recover_after_time`` to mitigate this risk. Logical Replication ------------------- The replication process can be configured with the following settings. These settings are dynamic and can be changed at runtime. .. _replication.logical.ops_batch_size: **replication.logical.ops_batch_size** | *Default:* ``50000`` | *Min value:* ``16`` | *Runtime:* ``yes`` Maximum number of operations to replicate from the publisher cluster per poll. Represents a number to advance a sequence. .. _replication.logical.reads_poll_duration: **replication.logical.reads_poll_duration** | *Default:* ``50`` | *Runtime:* ``yes`` The maximum time (in milliseconds) to wait for changes per poll operation. When a subscriber makes a request to a publisher, it has ``reads_poll_duration`` milliseconds to harvest changes from the publisher. .. _replication.logical.recovery.chunk_size: **replication.logical.recovery.chunk_size** | *Default:* ``1MB`` | *Min value:* ``1KB`` | *Max value:* ``1GB`` | *Runtime:* ``yes`` Chunk size to transfer files during the initial recovery of a replicating table. .. _replication.logical.recovery.max_concurrent_file_chunks: **replication.logical.recovery.max_concurrent_file_chunks** | *Default:* ``2`` | *Min value:* ``1`` | *Max value:* ``5`` | *Runtime:* ``yes`` Controls the number of file chunk requests that can be sent in parallel between clusters during the recovery. .. hide: cr> RESET GLOBAL "stats.jobs_log_filter" RESET OK, 1 row affected (... 
.. hide:

    cr> RESET GLOBAL "stats.jobs_log_filter"
    RESET OK, 1 row affected (... sec)

    cr> RESET GLOBAL "memory.operation_limit"
    RESET OK, 1 row affected (... sec)

.. _bootstrap checks: https://crate.io/docs/crate/howtos/en/latest/admin/bootstrap-checks.html
.. _multi-zone setup how-to guide: https://crate.io/docs/crate/howtos/en/latest/clustering/multi-zone-setup.html

.. highlight:: sh

.. vale off

.. _conf-node-settings:

======================
Node-specific settings
======================

.. rubric:: Table of contents

.. contents::
   :local:

Basics
======

.. _cluster.name:

**cluster.name**
| *Default:* ``crate``
| *Runtime:* ``no``

The name of the CrateDB cluster the node should join.

.. _node.name:

**node.name**
| *Runtime:* ``no``

The name of the node. If no name is configured, a random one will be generated.

.. NOTE:: Node names must be unique in a CrateDB cluster.

.. _node.store_allow_mmap:

**node.store.allow_mmap**
| *Default:* ``true``
| *Runtime:* ``no``

This setting indicates whether or not memory-mapping is allowed.

Node types
==========

CrateDB supports different types of nodes. The following settings can be used to differentiate nodes upon startup:

.. _node.master:

**node.master**
| *Default:* ``true``
| *Runtime:* ``no``

Whether or not this node is able to get elected as *master* node in the cluster.

.. _node.data:

**node.data**
| *Default:* ``true``
| *Runtime:* ``no``

Whether or not this node will store data.

Using different combinations of these two settings, you can create four different types of node. Each type of node is differentiated by what types of load it will handle. Tabulating the truth values for ``node.master`` and ``node.data`` produces a truth table outlining the four different types of node:

+---------------+-----------------------------+------------------------------+
|               | **Master**                  | **No master**                |
+---------------+-----------------------------+------------------------------+
| **Data**      | Handles all loads.          | Handles client requests and  |
|               |                             | query execution.             |
+---------------+-----------------------------+------------------------------+
| **No data**   | Handles cluster management. | Handles client requests.     |
+---------------+-----------------------------+------------------------------+

Nodes marked as ``node.master`` will only handle cluster management if they are elected as the cluster master. All other loads are shared equally.

General
=======

.. _node.sql.read_only:

**node.sql.read_only**
| *Default:* ``false``
| *Runtime:* ``no``

If set to ``true``, the node will only allow SQL statements which result in read operations.

.. _statement_timeout:

**statement_timeout**
| *Default:* ``0``
| *Runtime:* ``yes``

The maximum duration of any statement before it gets cancelled. This value is used as the default value for the :ref:`statement_timeout session setting `. If set to ``0``, queries are allowed to run indefinitely and don't get cancelled automatically.

.. NOTE:: Updating this setting won't affect existing sessions, it will only take effect for new sessions.

Networking
==========

.. _conf_hosts:

Hosts
-----

.. _network.host:

**network.host**
| *Default:* ``_local_``
| *Runtime:* ``no``

The IP address CrateDB will bind itself to. This setting sets both the `network.bind_host`_ and `network.publish_host`_ values.

.. _network.bind_host:

**network.bind_host**
| *Default:* ``_local_``
| *Runtime:* ``no``

This setting determines the address CrateDB should bind itself to.

.. _network.publish_host:

**network.publish_host**
| *Default:* ``_local_``
| *Runtime:* ``no``

This setting is used by a CrateDB node to publish its own address to the rest of the cluster.

.. TIP::

   Apart from IPv4 and IPv6 addresses there are some special values that can be used for all above settings:

   ========================= =================================================
   ``_local_``               Any loopback addresses on the system, for example ``127.0.0.1``.
   ``_site_``                Any site-local addresses on the system, for example ``192.168.0.1``.
   ``_global_``              Any globally-scoped addresses on the system, for example ``8.8.8.8``.
   ``_[INTERFACE]_``         Addresses of a network interface, for example ``_en0_``.
   ========================= =================================================
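For example, to make a node reachable by other hosts on the local network instead of only via loopback, a minimal ``crate.yml`` sketch could use one of the special values listed above (illustrative only; choose the value that matches your network layout):

.. code-block:: yaml

    # Bind to and publish a site-local address, for example 192.168.0.1.
    network.host: _site_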
.. _conf_ports:

Ports
-----

.. _http.port:

**http.port**
| *Runtime:* ``no``

This defines the TCP port range to which the CrateDB HTTP service will be bound. It defaults to ``4200-4300``. The first free port in this range is always used. If this is set to an integer value, it is considered an explicit single port. The HTTP protocol is used for the REST endpoint which is used by all clients except the Java client.

.. _http.publish_port:

**http.publish_port**
| *Runtime:* ``no``

The port HTTP clients should use to communicate with the node. It is necessary to define this setting if the bound HTTP port (``http.port``) of the node is not directly reachable from outside, e.g. when running it behind a firewall or inside a Docker container.

.. _transport.tcp.port:

**transport.tcp.port**
| *Runtime:* ``no``

This defines the TCP port range to which the CrateDB transport service will be bound. It defaults to ``4300-4400``. The first free port in this range is always used. If this is set to an integer value, it is considered an explicit single port. The transport protocol is used for internal node-to-node communication.

.. _transport.publish_port:

**transport.publish_port**
| *Runtime:* ``no``

The port that the node publishes to the cluster for its own discovery. It is necessary to define this setting when the bound transport port (``transport.tcp.port``) of the node is not directly reachable from outside, e.g. when running it behind a firewall or inside a Docker container.

.. _psql.port:

**psql.port**
| *Runtime:* ``no``

This defines the TCP port range to which the CrateDB PostgreSQL service will be bound. It defaults to ``5432-5532``. The first free port in this range is always used. If this is set to an integer value, it is considered an explicit single port.
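If you prefer fixed ports over the default port ranges, for example to simplify firewall rules or container port mappings, each service can be pinned to a single port. A minimal sketch for ``crate.yml`` (the values shown are simply the first ports of each default range):

.. code-block:: yaml

    http.port: 4200           # HTTP / REST endpoint
    psql.port: 5432           # PostgreSQL wire protocol
    transport.tcp.port: 4300  # internal node-to-node transport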
Advanced TCP settings
---------------------

Any interface that uses TCP (PostgreSQL wire, HTTP, and transport protocols) shares the following settings:

.. _network.tcp.no_delay:

**network.tcp.no_delay**
| *Default:* ``true``
| *Runtime:* ``no``

Enable or disable `Nagle's algorithm`_ for buffering TCP packets. Buffering is disabled by default.

.. _network.tcp.keep_alive:

**network.tcp.keep_alive**
| *Default:* ``true``
| *Runtime:* ``no``

Configures the ``SO_KEEPALIVE`` option for sockets, which determines whether they send TCP keepalive probes.

.. _network.tcp.reuse_address:

**network.tcp.reuse_address**
| *Default:* ``true`` on non-Windows machines and ``false`` otherwise
| *Runtime:* ``no``

Configures the ``SO_REUSEADDR`` option for sockets, which determines whether they should reuse the address.

.. _network.tcp.send_buffer_size:

**network.tcp.send_buffer_size**
| *Default:* ``-1``
| *Runtime:* ``no``

The size of the TCP send buffer (`SO_SNDBUF`_ socket option). By default not explicitly set.

.. _network.tcp.receive_buffer_size:

**network.tcp.receive_buffer_size**
| *Default:* ``-1``
| *Runtime:* ``no``

The size of the TCP receive buffer (`SO_RCVBUF`_ socket option). By default not explicitly set.

.. NOTE:: Each setting in this section has its counterpart for HTTP and transport. To provide a protocol-specific setting, remove the ``network`` prefix and use either ``http`` or ``transport`` instead. For example, ``no_delay`` can be configured as ``http.tcp.no_delay`` and ``transport.tcp.no_delay``. Note that the PostgreSQL interface takes its settings from transport.

Transport settings
------------------

.. _transport.connect_timeout:

**transport.connect_timeout**
| *Default:* ``30s``
| *Runtime:* ``no``

The connect timeout for initiating a new connection.

.. _transport.compress:

**transport.compress**
| *Default:* ``false``
| *Runtime:* ``no``

Set to ``true`` to enable compression (DEFLATE) between all nodes.

.. _transport.ping_schedule:

**transport.ping_schedule**
| *Default:* ``-1``
| *Runtime:* ``no``

Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to ``-1`` (disabled). It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections.

Paths
=====

.. NOTE:: Relative paths are relative to :ref:`CRATE_HOME `. Absolute paths override this behavior.

.. _path.conf:

**path.conf**
| *Default:* ``config``
| *Runtime:* ``no``

Filesystem path to the directory containing the configuration files ``crate.yml`` and ``log4j2.properties``.

.. _path.data:

**path.data**
| *Default:* ``data``
| *Runtime:* ``no``

Filesystem path to the directory where this CrateDB node stores its data (table data and cluster metadata). Multiple paths can be set by using a comma-separated list, and each of these paths will hold full shards (instead of striping data across them). For example:

.. code-block:: yaml

    path.data: /path/to/data1,/path/to/data2

When CrateDB finds striped shards at the provided locations (from CrateDB <0.55.0), these shards will be migrated automatically on startup.

.. _path.logs:

**path.logs**
| *Default:* ``logs``
| *Runtime:* ``no``

Filesystem path to a directory where log files should be stored. Can be used as a variable inside ``log4j2.properties``. For example:

.. code-block:: yaml

    appender:
      file:
        file: ${path.logs}/${cluster.name}.log

.. _path.repo:

**path.repo**
| *Runtime:* ``no``

A list of filesystem or UNC paths where repositories of type :ref:`sql-create-repo-fs` may be stored. Without this setting a CrateDB user could write snapshot files to any directory that is writable by the CrateDB process. To safeguard against this security issue, the possible paths have to be whitelisted here. See also the :ref:`location ` setting of repository type ``fs``.

.. SEEALSO:: :ref:`blobs.path `

Plug-ins
========

.. _plugin.mandatory:

**plugin.mandatory**
| *Runtime:* ``no``

A list of plug-ins that are required for a node to start up. If any plug-in listed here is missing, the CrateDB node will fail to start.

CPU
===

.. _processors:

**processors**
| *Runtime:* ``no``

The number of processors is used to appropriately size the thread pools CrateDB uses. If not set explicitly, CrateDB will infer the number from the available processors on the system.
In environments where the CPU amount can be restricted (like Docker) or when multiple CrateDB instances are running on the same hardware, the inferred number might be too high. In such a case, it is recommended to set the value explicitly. Memory ====== .. _bootstrap.memory_lock: **bootstrap.memory_lock** | *Default:* ``false`` | *Runtime:* ``no`` CrateDB performs poorly when the JVM starts swapping: you should ensure that it *never* swaps. If set to ``true``, CrateDB will use the ``mlockall`` system call on startup to ensure that the memory pages of the CrateDB process are locked into RAM. Garbage collection ================== CrateDB logs if JVM garbage collection on different memory pools takes too long. The following settings can be used to adjust these timeouts: .. _monitor.jvm.gc.collector.young.warn: **monitor.jvm.gc.collector.young.warn** | *Default:* ``1000ms`` | *Runtime:* ``no`` CrateDB will log a warning message if it takes more than the configured timespan to collect the *Eden Space* (heap). .. _monitor.jvm.gc.collector.young.info: **monitor.jvm.gc.collector.young.info** | *Default:* ``700ms`` | *Runtime:* ``no`` CrateDB will log an info message if it takes more than the configured timespan to collect the *Eden Space* (heap). .. _monitor.jvm.gc.collector.young.debug: **monitor.jvm.gc.collector.young.debug** | *Default:* ``400ms`` | *Runtime:* ``no`` CrateDB will log a debug message if it takes more than the configured timespan to collect the *Eden Space* (heap). .. _monitor.jvm.gc.collector.old.warn: **monitor.jvm.gc.collector.old.warn** | *Default:* ``10000ms`` | *Runtime:* ``no`` CrateDB will log a warning message if it takes more than the configured timespan to collect the *Old Gen* / *Tenured Gen* (heap). .. _monitor.jvm.gc.collector.old.info: **monitor.jvm.gc.collector.old.info** | *Default:* ``5000ms`` | *Runtime:* ``no`` CrateDB will log an info message if it takes more than the configured timespan to collect the *Old Gen* / *Tenured Gen* (heap). .. _monitor.jvm.gc.collector.old.debug: **monitor.jvm.gc.collector.old.debug** | *Default:* ``2000ms`` | *Runtime:* ``no`` CrateDB will log a debug message if it takes more than the configured timespan to collect the *Old Gen* / *Tenured Gen* (heap). Authentication ============== .. _host_based_auth: Trust authentication -------------------- .. _auth.trust.http_default_user: **auth.trust.http_default_user** | *Default:* ``crate`` | *Runtime:* ``no`` The default user that should be used for authentication when clients connect to CrateDB via HTTP protocol and they do not specify a user via the ``Authorization`` request header. .. _auth.trust.http_support_x_real_ip: **auth.trust.http_support_x_real_ip** | *Default:* ``false`` | *Runtime:* ``no`` If enabled, the HTTP transport will trust the ``X-Real-IP`` header sent by the client to determine the client's IP address. This is useful when CrateDB is running behind a reverse proxy or load-balancer. For improved security, any ``_local_`` IP address (``127.0.0.1`` and ``::1``) defined in this header will be ignored. .. warning:: Enabling this setting can be a security risk, as it allows clients to impersonate other clients by sending a fake ``X-Real-IP`` header. Host-based authentication ------------------------- Authentication settings (``auth.host_based.*``) are node settings, which means that their values apply only to the node where they are applied and different nodes may have different authentication settings. .. 
_auth.host_based.enabled:

**auth.host_based.enabled**
| *Default:* ``false``
| *Runtime:* ``no``

Setting to enable or disable Host Based Authentication (HBA). It is disabled by default.

.. _jwt_defaults:

JWT Based Authentication
........................

Default global settings for the :ref:`JWT authentication `.

.. _auth.host_based.jwt.iss:

**auth.host_based.jwt.iss**
| *Runtime:* ``no``

Default value for the ``iss`` :ref:`JWT property `. If ``iss`` is set, user specific JWT properties are ignored.

.. _auth.host_based.jwt.aud:

**auth.host_based.jwt.aud**
| *Runtime:* ``no``

Default value for the ``aud`` :ref:`JWT property `. If ``aud`` is set but ``iss`` is not, then the global config is not complete and user specific JWT properties are used.

HBA entries
...........

The ``auth.host_based.config.`` setting is a group setting that can have zero, one or multiple groups that are defined by their group key (``${order}``) and their fields (``user``, ``address``, ``method``, ``protocol``, ``ssl``).

.. _$(order):

**${order}:**
| An identifier that is used as a natural order key when looking up the host
| based configuration entries. For example, an order key of ``a`` will be
| looked up before an order key of ``b``. This key guarantees that the entry
| lookup order will remain independent from the insertion order of the
| entries.

The :ref:`admin_hba` setting is a list of predicates that users can specify to restrict or allow access to CrateDB. The meaning of the entry fields is as follows:

.. _auth.host_based.config.${order}.user:

**auth.host_based.config.${order}.user**
| *Runtime:* ``no``
| Specifies an existing CrateDB username, only ``crate`` user (superuser) is
| available. If no user is specified in the entry, then all existing users
| can have access.

.. _auth.host_based.config.${order}.address:

**auth.host_based.config.${order}.address**
| *Runtime:* ``no``
| The client machine addresses that the client matches, and which are allowed
| to authenticate. This field may contain an IPv4 address, an IPv6 address or
| an IPv4 CIDR mask. For example: ``127.0.0.1`` or ``127.0.0.1/32``. It also
| may contain a hostname or the special ``_local_`` notation which will match
| both IPv4 and IPv6 connections from localhost. A hostname specification
| that starts with a dot (.) matches a suffix of the actual hostname, so
| ``.crate.io`` would match ``foo.crate.io`` but not just ``crate.io``. If no
| address is specified in the entry, then access to CrateDB is open for all
| hosts.

.. _auth.host_based.config.${order}.method:

**auth.host_based.config.${order}.method**
| *Runtime:* ``no``
| The authentication method to use when a connection matches this entry.
| Valid values are ``trust``, ``cert``, ``password`` and ``jwt``. If no
| method is specified, the ``trust`` method is used by default.
| See :ref:`auth_trust`, :ref:`auth_cert`, :ref:`auth_password` and
| :ref:`auth_jwt` for more information about these methods.

.. _auth.host_based.config.${order}.protocol:

**auth.host_based.config.${order}.protocol**
| *Runtime:* ``no``
| Specifies the protocol for which the authentication entry should be used.
| If no protocol is specified, then this entry will be valid for all
| protocols that rely on host based authentication (see :ref:`auth_trust`).

.. _auth.host_based.config.${order}.ssl:

**auth.host_based.config.${order}.ssl**
| *Default:* ``optional``
| *Runtime:* ``no``
| Specifies whether the client must use SSL/TLS to connect to the cluster.
| If set to ``on`` then the client must be connected through SSL/TLS,
| otherwise it is not authenticated. If set to ``off`` then the client must
| *not* be connected via SSL/TLS, otherwise it is not authenticated. Finally
| ``optional``, which is the value when the option is completely skipped,
| means that the client can be authenticated regardless of whether SSL/TLS
| is used or not.

Example of config groups:

.. code-block:: yaml

    auth.host_based.config:
      entry_a:
        user: crate
        address: 127.16.0.0/16
      entry_b:
        method: trust
      entry_3:
        user: crate
        address: 172.16.0.0/16
        method: trust
        protocol: pg
        ssl: on

.. _ssl_config:

Secured communications (SSL/TLS)
================================

Secured communications via SSL allow you to encrypt traffic between CrateDB nodes and clients connecting to them. Connections are secured using Transport Layer Security (TLS).

.. _ssl.http.enabled:

**ssl.http.enabled**
| *Default:* ``false``
| *Runtime:* ``no``

Set this to ``true`` to enable secure communication between the CrateDB node and the client through SSL via the HTTPS protocol.

.. _ssl.psql.enabled:

**ssl.psql.enabled**
| *Default:* ``false``
| *Runtime:* ``no``

Set this to ``true`` to enable secure communication between the CrateDB node and the client through SSL via the PostgreSQL wire protocol.

.. _ssl.transport.mode:

**ssl.transport.mode**
| *Default:* ``legacy``
| *Runtime:* ``no``

For communication between nodes, choose:

``off``
  SSL cannot be used.

``legacy``
  SSL is not used. If HBA is enabled, transport connections won't be verified. Any reachable host can establish a connection.

``on``
  SSL must be used.

.. _ssl.keystore_filepath:

**ssl.keystore_filepath**
| *Runtime:* ``no``

The full path to the node keystore file.

.. _ssl.keystore_password:

**ssl.keystore_password**
| *Runtime:* ``no``

The password used to decrypt the keystore file defined with ``ssl.keystore_filepath``.

.. _ssl.keystore_key_password:

**ssl.keystore_key_password**
| *Runtime:* ``no``

The password entered at the end of the ``keytool -genkey`` command.

.. NOTE:: Optionally, trusted CA certificates can be stored separately from the node's keystore, in a truststore for CA certificates.

.. _ssl.truststore_filepath:

**ssl.truststore_filepath**
| *Runtime:* ``no``

The full path to the node truststore file. If not defined, then only a keystore will be used.

.. _ssl.truststore_password:

**ssl.truststore_password**
| *Runtime:* ``no``

The password used to decrypt the truststore file defined with ``ssl.truststore_filepath``.

.. _ssl.resource_poll_interval:

**ssl.resource_poll_interval**
| *Default:* ``5m``
| *Runtime:* ``no``

The frequency at which SSL files such as keystore and truststore are polled for changes.
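For illustration, a hedged ``crate.yml`` sketch that enables TLS for both client-facing protocols and points the node at a keystore; the file path and password are placeholders, not defaults:

.. code-block:: yaml

    ssl.http.enabled: true
    ssl.psql.enabled: true
    # Placeholder values; use your own keystore location and password.
    ssl.keystore_filepath: /path/to/keystore.jks
    ssl.keystore_password: changeit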
Cross-origin resource sharing (CORS)
====================================

Many browsers support the `same-origin policy`_ which requires web applications to explicitly allow requests across origins. The `cross-origin resource sharing`_ settings in CrateDB allow for configuring these.

.. _http.cors.enabled:

**http.cors.enabled**
| *Default:* ``false``
| *Runtime:* ``no``

Enable or disable `cross-origin resource sharing`_.

.. _http.cors.allow-origin:

**http.cors.allow-origin**
| *Default:* ````
| *Runtime:* ``no``

Define allowed origins of a request. ``*`` allows *any* origin (which can be a substantial security risk) and by prepending a ``/`` the string will be treated as a :ref:`regular expression `. For example ``/https?:\/\/crate.io/`` will allow requests from ``http://crate.io`` and ``https://crate.io``. This setting disallows any origin by default.

.. _http.cors.max-age:

**http.cors.max-age**
| *Default:* ``1728000`` (20 days)
| *Runtime:* ``no``

Max cache age of a preflight request in seconds.

.. _http.cors.allow-methods:

**http.cors.allow-methods**
| *Default:* ``OPTIONS, HEAD, GET, POST, PUT, DELETE``
| *Runtime:* ``no``

Allowed HTTP methods.

.. _http.cors.allow-headers:

**http.cors.allow-headers**
| *Default:* ``X-Requested-With, Content-Type, Content-Length``
| *Runtime:* ``no``

Allowed HTTP headers.

.. _http.cors.allow-credentials:

**http.cors.allow-credentials**
| *Default:* ``false``
| *Runtime:* ``no``

Add the ``Access-Control-Allow-Credentials`` header to responses.

.. _`same-origin policy`: https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
.. _`cross-origin resource sharing`: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS

Blobs
=====

.. _blobs.path:

**blobs.path**
| *Runtime:* ``no``

Path to a filesystem directory where blob data allocated for this node should be stored. By default, blobs will be stored under the same path as normal data. A relative path value is interpreted as relative to ``CRATE_HOME``.

.. _ref-configuration-repositories:

Repositories
============

Repositories are used to :ref:`backup ` a CrateDB cluster.

.. _repositories.url.allowed_urls:

**repositories.url.allowed_urls**
| *Runtime:* ``no``

This setting only applies to repositories of type :ref:`sql-create-repo-url`. With this setting, a list of URLs can be specified which are allowed to be used if a repository of type ``url`` is created. Wildcards are supported in the host, path, query and fragment parts. This setting is a security measure to prevent access to arbitrary resources. In addition, the supported protocols can be restricted using the :ref:`repositories.url.supported_protocols ` setting.

.. _repositories.url.supported_protocols:

**repositories.url.supported_protocols**
| *Default:* ``http``, ``https``, ``ftp``, ``file`` and ``jar``
| *Runtime:* ``no``

A list of protocols that are supported by repositories of type :ref:`sql-create-repo-url`. The ``jar`` protocol is used to access the contents of JAR files. For more info, see the Java `JarURLConnection documentation`_. See also the :ref:`path.repo ` setting.

.. _`JarURLConnection documentation`: https://docs.oracle.com/javase/8/docs/api/java/net/JarURLConnection.html

Queries
=======

.. _indices.query.bool.max_clause_count:

**indices.query.bool.max_clause_count**
| *Default:* ``8192``
| *Runtime:* ``no``

This setting limits the number of boolean clauses that can be generated by the ``!= ANY()``, ``LIKE ANY()``, ``ILIKE ANY()``, ``NOT LIKE ANY()`` and ``NOT ILIKE ANY()`` :ref:`operators ` on arrays, in order to prevent users from executing queries that may result in heavy memory consumption, causing nodes to crash with ``OutOfMemory`` exceptions. Throws ``TooManyClauses`` errors when the limit is exceeded.

.. NOTE:: You can avoid ``TooManyClauses`` errors by increasing this setting. The number of boolean clauses used can be larger than the number of elements in the array.

Legacy
======

.. _legacy.table_function_column_naming:

**legacy.table_function_column_naming**
| *Default:* ``false``
| *Runtime:* ``no``

Since CrateDB 5.0.0, if the table function is not aliased and returns a single column of a base data type, the table function name is used as the column name. This setting can be set in order to use the naming convention prior to 5.0.0.
The following table functions are affected by this setting: - :ref:`unnest ` - :ref:`regexp_matches ` - :ref:`generate_series ` When the setting is set and a single column is expected to be returned, the returned column will be named ``col1``, ``groups``, or ``col1`` respectively. .. NOTE:: Beware that if not all nodes in the cluster are consistently set or unset, the behaviour will depend on the node handling the query. .. _conf-node-lang-js: JavaScript language =================== .. _lang.js.enabled: **lang.js.enabled** | *Default:* ``true`` | *Runtime:* ``no`` Setting to enable or disable :ref:`JavaScript UDF ` support. .. _conf-fdw: Foreign Data Wrappers ===================== .. _fdw.allow_local: **fdw.allow_local** | *Default:* ``false`` | *Runtime:* ``no`` Allow access to local addresses via :ref:`Foreign data wrappers ` for all users. By default, only the ``crate`` superuser is allowed to access foreign servers that point to ``localhost``. .. warning:: Changing this to ``true`` can pose a security risk if you do not trust the users with ``AL`` permissions on the system. They can create foreign servers, foreign tables and user mappings that allow them to access services running on the same machine as CrateDB as if connected locally - effectively bypassing any restrictions set up via :ref:`admin_hba`. Do **not** change this if you don't understand the implications. .. _conf-node-attributes: Custom attributes ================= The ``node.attr`` namespace is a bag of custom attributes. Custom attributes can be :ref:`used to control shard allocation `. You can create any attribute you want under this namespace, like ``node.attr.key: value``. These attributes use the ``node.attr`` namespace to distinguish them from core node attribute like ``node.name``. Custom attributes are not validated by CrateDB, unlike core node attributes. .. vale on .. _plugins: https://github.com/crate/crate/blob/master/devs/docs/plugins.rst .. _Nagle's algorithm: https://en.wikipedia.org/wiki/Nagle%27s_algorithm .. _SO_RCVBUF: https://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#SO_RCVBUF .. _SO_SNDBUF: https://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#SO_SNDBUF(cluster-api-cli)= # CrateDB Cluster CLI The `ctk cluster {start,info,stop}` subcommands provide higher level CLI entrypoints to start/deploy/resume a database cluster, inquire information about it, and stop/suspend it again. The subsystem is implemented on top of the {ref}`croud:index` application, which gets installed along the lines and is used later on this page. ## Install We recommend using the [uv] package manager to install the application per `uv tool install`. Otherwise, using `pipx install` or `pip install --user` are viable alternatives. ```shell uv tool install --upgrade 'cratedb-toolkit' ``` ## Authenticate When working with [CrateDB Cloud], you can select between two authentication variants. Either _interactively authorize_ your terminal session using `croud login`, ```shell croud login --idp {cognito,azuread,github,google} ``` or provide API access credentials per environment variables for _headless/unattended operations_ after creating them using the [CrateDB Cloud Console] or `croud api-keys create`. ```shell # CrateDB Cloud API credentials. export CRATEDB_CLOUD_API_KEY='' export CRATEDB_CLOUD_API_SECRET='' ``` ## Configure The `ctk cluster` subcommand accepts configuration settings per CLI options and environment variables. 
:::{include} ../cluster/_address.md ::: ## Usage Start or resume a cluster, deploying it on demand if it doesn't exist. ```shell ctk cluster start --cluster-name hotzenplotz ``` Display cluster information. ```shell ctk cluster info --cluster-name hotzenplotz ``` Stop (suspend) a cluster. ```shell ctk cluster stop --cluster-name hotzenplotz ``` :::{seealso} {ref}`cluster-api-tutorial` includes a full end-to-end tutorial. ::: [CrateDB Cloud]: https://cratedb.com/docs/cloud/ [CrateDB Cloud Console]: https://console.cratedb.cloud/ [uv]: https://docs.astral.sh/uv/(cluster-api-python)= # CrateDB Cluster Python API The `cratedb_toolkit.ManagedCluster` class provides the higher level API/SDK entrypoints to start/deploy/resume a database cluster, inquire information about it, and stop/suspend it again. The subsystem is implemented on top of the {ref}`croud:index` application, which gets installed along the lines and is used later on this page. ## Install We recommend using the [uv] package manager to install the framework per `uv pip install`, or add it to your application using `uv add`. Otherwise, using `pip install` is a viable alternative. ```shell uv pip install --upgrade 'cratedb-toolkit' ``` ## Authenticate When working with [CrateDB Cloud], you can select between two authentication variants. Either _interactively authorize_ your terminal session using `croud login`, ```shell croud login --idp {cognito,azuread,github,google} ``` or provide API access credentials per environment variables for _headless/unattended operations_ after creating them using the [CrateDB Cloud Console] or `croud api-keys create`. ```shell # CrateDB Cloud API credentials. export CRATEDB_CLOUD_API_KEY='' export CRATEDB_CLOUD_API_SECRET='' ``` ## Configure `ManagedCluster` accepts configuration settings per constructor parameters or environment variables. :Environment variables: `CRATEDB_CLUSTER_ID`, `CRATEDB_CLUSTER_NAME`, `CRATEDB_CLUSTER_URL` :::{note} - All address options are mutually exclusive. - The cluster identifier takes precedence over the cluster name. - The cluster url takes precedence over the cluster id and name. - Environment variables can be stored into an `.env` file in your working directory. ::: ## Usage Acquire a database cluster handle, and run database workload. ```python from pprint import pprint from cratedb_toolkit import ManagedCluster # Connect to CrateDB Cloud and run the database workload. with ManagedCluster.from_env() as cluster: pprint(cluster.query("SELECT * from sys.summits LIMIT 2;")) ``` By default, the cluster will spin up, but not shut down after exiting the context manager. If you want to do it, use the `stop_on_exit=True` option. ```python from cratedb_toolkit import ManagedCluster with ManagedCluster.from_env(stop_on_exit=True) as cluster: # ... ``` :::{seealso} {ref}`cluster-api-tutorial` includes a full end-to-end tutorial. ::: [CrateDB Cloud]: https://cratedb.com/docs/cloud/ [CrateDB Cloud Console]: https://console.cratedb.cloud/ [uv]: https://docs.astral.sh/uv/(cluster-api-tutorial)= # CrateDB Cluster CLI/API Tutorial This tutorial outlines end-to-end examples connecting to the CrateDB Cloud API and the CrateDB database cluster. It includes examples about both the {ref}`cluster-api-cli` and the {ref}`cluster-api-python`. ## Configure It needs all relevant access credentials and configuration settings outlined below. This example uses environment variables stored into an `.env` file. ```shell cat > .env << EOF # Connect to managed CrateDB. # CrateDB Cloud API credentials. 
CRATEDB_CLOUD_API_KEY='' CRATEDB_CLOUD_API_SECRET='' # CrateDB Cloud cluster identifier (id or name). # CRATEDB_CLUSTER_ID='' CRATEDB_CLUSTER_NAME='' EOF ``` ## CLI Use the {ref}`cluster-api`'s {ref}`ctk cluster ` command to deploy a database cluster, the `ctk load table` command of the {ref}`io-subsystem` to import data, and the {ref}`shell` command for executing an SQL statement. ```shell ctk cluster start ctk load table "https://cdn.crate.io/downloads/datasets/cratedb-datasets/machine-learning/timeseries/nab-machine-failure.csv" ctk shell --command 'SELECT * FROM "nab-machine-failure" LIMIT 10;' ctk cluster stop ``` ## Python API Use the Python API to deploy, import, and query data. ```python from pprint import pprint from cratedb_toolkit import InputOutputResource, ManagedCluster # Define data source. url = "https://cdn.crate.io/downloads/datasets/cratedb-datasets/machine-learning/timeseries/nab-machine-failure.csv" source = InputOutputResource(url=url) # Connect to CrateDB Cloud. with ManagedCluster.from_env() as cluster: # Invoke the import job. cluster.load_table(source=source) # Query imported data. results = cluster.query('SELECT * FROM "nab-machine-failure" LIMIT 10;') pprint(results) ```(connect)= # Connect to a CrateDB cluster This documentation section is about connecting your applications to CrateDB and CrateDB Cloud, using database drivers, and compatibility-adapters and -dialects. ## Protocol Support CrateDB supports both the [HTTP protocol] and the [PostgreSQL wire protocol], which ensures that many clients that work with PostgreSQL, will also work with CrateDB. Through corresponding drivers, CrateDB is compatible with [ODBC], [JDBC], and other database API specifications. While we generally recommend the PostgreSQL interface (PG) for maximum compatibility in PostgreSQL environments, the HTTP interface supports [CrateDB bulk operations] and [CrateDB BLOBs], which are not supported by the PostgreSQL protocol. The HTTP protocol can also be used to connect from environments where PostgreSQL-based communication is not applicable. ## Configure In order to connect to CrateDB, your application or driver needs to be configured with corresponding connection properties. Please note that different applications and drivers may obtain connection properties in different formats. ::::::{tab-set} :::::{tab-item} CrateDB and CrateDB Cloud ::::{grid} :margin: 0 :padding: 0 :::{grid-item} :columns: 4 :margin: 0 :padding: 0 **Connection properties** :Host: ``.cratedb.net :Port: 5432 (PostgreSQL) or
4200 (HTTP) :User: `` :Pass: `` ::: :::{grid-item} :columns: 8 :margin: 0 :padding: 0 :class: driver-slim **Connection-string examples**

**Native PostgreSQL, psql** ``` postgresql://:@.cratedb.net:5432/doc ``` **JDBC: PostgreSQL pgJDBC** ``` jdbc:postgresql://:@.cratedb.net:5432/doc ``` **JDBC: CrateDB JDBC, e.g. Apache Flink** ``` jdbc:crate://:@.cratedb.net:5432/doc ``` **HTTP: Admin UI, CLI, CrateDB drivers** ``` https://:@.cratedb.net:4200/ ``` **SQLAlchemy** ``` crate://:@.cratedb.net:4200/?schema=doc&ssl=true ``` ::: :::: ::::: :::::{tab-item} CrateDB on localhost ::::{grid} :margin: 0 :padding: 0 :::{grid-item} :columns: 4 :margin: 0 :padding: 0 **Connection properties** :Host: localhost :Port: 5432 (PostgreSQL) or
4200 (HTTP) :User: `crate` :Pass: (empty) ::: :::{grid-item} :columns: 8 :margin: 0 :padding: 0 :class: driver-slim **Connection-string examples**

**Native PostgreSQL, psql** ``` postgresql://crate@localhost:5432/doc ``` **JDBC: PostgreSQL pgJDBC** ``` jdbc:postgresql://crate@localhost:5432/doc ``` **JDBC: CrateDB JDBC, e.g. Apache Flink** ``` jdbc:crate://:@localhost:5432/doc ``` **HTTP: Admin UI, CLI, CrateDB drivers** ``` http://crate@localhost:4200/ ``` **SQLAlchemy** ``` crate://crate@localhost:4200/?schema=doc ``` ::: :::: ::::: :::::: ```{tip} - CrateDB's fixed catalog name is `crate`, the default schema name is `doc`. - CrateDB does not implement the notion of a database; however, tables can be created in different [schemas]. - When asked for a *database name*, specifying a schema name (any), or the fixed catalog name `crate` may be applicable. - If a database-/schema-name is omitted while connecting, the PostgreSQL drivers may default to the "username". - The predefined [superuser] on an unconfigured CrateDB cluster is called `crate`, defined without a password. - For authenticating properly, please learn about the available [authentication] options. ``` ## Client Libraries This section lists drivers and adapters for relevant programming languages, frameworks, and environments. ### PostgreSQL The drivers listed in this section all use the [CrateDB PostgreSQL interface]. ::::{sd-table} :widths: 2 3 5 2 :row-class: top-border :::{sd-row} ```{sd-item} ``` ```{sd-item} **Driver/Adapter** ``` ```{sd-item} **Description** ``` ```{sd-item} **Info** ``` ::: :::{sd-row} ```{sd-item} \- ``` ```{sd-item} [PostgreSQL ODBC](https://odbc.postgresql.org/) ``` ```{sd-item} The official PostgreSQL ODBC Driver. For connecting to CrateDB from any environment that supports it. ``` ```{sd-item} ``` ::: :::{sd-row} ```{sd-item} .NET ``` ```{sd-item} [Npgsql](https://www.npgsql.org/) ``` ```{sd-item} An open source ADO.NET Data Provider for PostgreSQL, for programs written in C#, Visual Basic, and F#. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/npgsql/npgsql?label=latest)](https://github.com/npgsql/npgsql) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/csharp-npgsql) ``` ::: :::{sd-row} ```{sd-item} .NET ``` ```{sd-item} [CrateDB Npgsql fork](https://cratedb.com/docs/npgsql/) ``` ```{sd-item} This fork of the official driver was needed prior to CrateDB 4.2. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/npgsql?label=latest)](https://github.com/crate/npgsql) ``` ::: :::{sd-row} ```{sd-item} Golang ``` ```{sd-item} [pgx](https://github.com/jackc/pgx) ``` ```{sd-item} A pure Go driver and toolkit for PostgreSQL. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/jackc/pgx?label=latest)](https://github.com/jackc/pgx) ``` ::: :::{sd-row} ```{sd-item} Java ``` ```{sd-item} [PostgreSQL JDBC](https://jdbc.postgresql.org/) ``` ```{sd-item} The official PostgreSQL JDBC Driver. For connecting to CrateDB from any environment that supports it. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/pgjdbc/pgjdbc?label=latest)](https://github.com/pgjdbc/pgjdbc) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#java) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/java-jdbc) ``` ::: :::{sd-row} ```{sd-item} Java ``` ```{sd-item} [CrateDB PgJDBC fork](https://cratedb.com/docs/jdbc/) ``` ```{sd-item} For connecting to CrateDB with specialized type system support and other tweaks. Ignores the `ROLLBACK` statement and the `hstore` and `jsonb` extensions. 
``` ```{sd-item} [![](https://img.shields.io/maven-central/v/io.crate/crate-jdbc?label=latest)](https://github.com/crate/pgjdbc) ``` ::: :::{sd-row} ```{sd-item} Node.js ``` ```{sd-item} [node-postgres](https://node-postgres.com/) ``` ```{sd-item} A collection of Node.js modules for interfacing with a PostgreSQL database using JavaScript or TypeScript. Has support for callbacks, promises, async/await, connection pooling, prepared statements, cursors, streaming results, C/C++ bindings, rich type parsing, and more. ``` ```{sd-item} [![](https://img.shields.io/npm/v/pg?label=latest&color=blue)](https://github.com/brianc/node-postgres) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#javascript) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [PDO_PGSQL](https://www.php.net/manual/en/ref.pdo-pgsql.php) ``` ```{sd-item} For connecting to CrateDB from PHP, supporting its PDO interface. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/php/php-src?label=latest)](https://github.com/php/php-src/tree/master/ext/pdo_pgsql) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/php-pdo) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [AMPHP](https://amphp.org/) ``` ```{sd-item} For connecting to CrateDB using AMPHP, an Async PostgreSQL client for PHP. AMPHP is a collection of high-quality, event-driven libraries for PHP designed with fibers and concurrency in mind. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/amphp/postgres?label=latest)](https://github.com/amphp/postgres) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/php-amphp) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [aoipg](https://github.com/aio-libs/aiopg) ``` ```{sd-item} For connecting to CrateDB from Python, supporting Python's `asyncio` (PEP-3156/tulip) framework. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/aio-libs/aiopg?label=latest)](https://github.com/aio-libs/aiopg) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#aiopg) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [asyncpg](https://github.com/MagicStack/asyncpg) ``` ```{sd-item} For connecting to CrateDB from Python, supporting Python's `asyncio`. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/MagicStack/asyncpg?label=latest)](https://github.com/MagicStack/asyncpg) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#psycopg2) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [psycopg3](https://www.psycopg.org/psycopg3/docs/) ``` ```{sd-item} For connecting to CrateDB from Python, supporting Python's `asyncio`. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/psycopg/psycopg?label=latest)](https://github.com/psycopg/psycopg) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#psycopg3) ``` ::: :::: ### HTTP The drivers listed in this section all use the [CrateDB HTTP interface]. ::::{sd-table} :widths: 2 3 5 2 :row-class: top-border :::{sd-row} ```{sd-item} ``` ```{sd-item} **Driver/Adapter** ``` ```{sd-item} **Description** ``` ```{sd-item} **Info** ``` ::: :::{sd-row} ```{sd-item} MicroPython ``` ```{sd-item} [micropython-cratedb](https://github.com/crate/micropython-cratedb) ``` ```{sd-item} A MicroPython library connecting to the CrateDB HTTP API. 
``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/micropython-cratedb?label=latest)](https://github.com/crate/micropython-cratedb) ``` ::: :::{sd-row} ```{sd-item} Node.js ``` ```{sd-item} [node-crate](https://www.npmjs.com/package/node-crate) ``` ```{sd-item} A JavaScript library connecting to the CrateDB HTTP API. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/megastef/node-crate?label=latest)](https://github.com/megastef/node-crate) [![](https://img.shields.io/badge/example-application-darkcyan)](https://github.com/crate/devrel-shipping-forecast-geo-demo) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [CrateDB PDO driver](https://cratedb.com/docs/pdo/) ``` ```{sd-item} For connecting to CrateDB from PHP. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate-pdo?label=latest)](https://github.com/crate/crate-pdo) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#php) ``` ::: :::{sd-row} ```{sd-item} PHP ``` ```{sd-item} [CrateDB DBAL adapter](https://cratedb.com/docs/dbal/) ``` ```{sd-item} For connecting to CrateDB from PHP, using DBAL and Doctrine. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate-dbal?label=latest)](https://github.com/crate/crate-dbal) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#php) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [CrateDB Python driver](https://cratedb.com/docs/python/) ``` ```{sd-item} For connecting to CrateDB from Python. Has support for [CrateDB BLOBs]. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate-python?label=latest)](https://github.com/crate/crate-python) [![](https://img.shields.io/badge/docs-by%20example-darkgreen)][python-dbapi-by-example] [![](https://img.shields.io/badge/example-snippet-darkcyan)](#crate-python) ``` ::: :::{sd-row} ```{sd-item} Python ``` ```{sd-item} [SQLAlchemy dialect](https://cratedb.com/docs/sqlalchemy-cratedb/) ``` ```{sd-item} For connecting to CrateDB from Python, using SQLAlchemy. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/sqlalchemy-cratedb?label=latest)](https://github.com/crate/sqlalchemy-cratedb) [![](https://img.shields.io/badge/docs-by%20example-darkgreen)][python-sqlalchemy-by-example] [![](https://img.shields.io/badge/example-snippet-darkcyan)](#sqlalchemy-cratedb) ``` ::: :::{sd-row} ```{sd-item} Ruby ``` ```{sd-item} [CrateDB Ruby driver](https://github.com/crate/crate_ruby) ``` ```{sd-item} A Ruby client library for CrateDB. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/crate_ruby?label=latest)](https://github.com/crate/crate_ruby) [![](https://img.shields.io/badge/example-snippet-darkcyan)](#ruby) [![](https://img.shields.io/badge/example-runnable-darkcyan)](https://github.com/crate/cratedb-examples/tree/main/by-language/ruby) ``` ::: :::{sd-row} ```{sd-item} Ruby ``` ```{sd-item} [CrateDB ActiveRecord adapter](https://github.com/crate/activerecord-crate-adapter) ``` ```{sd-item} Ruby on Rails ActiveRecord adapter for CrateDB. ``` ```{sd-item} [![](https://img.shields.io/github/v/tag/crate/activerecord-crate-adapter?label=latest)](https://github.com/crate/activerecord-crate-adapter) ``` ::: :::: ```{tip} Please visit the [](#build-status) page for an overview about the integration status of the client drivers listed above, and more. 
``` ```{toctree} :maxdepth: 1 :hidden: java javascript php python ruby ``` [ADBC]: https://arrow.apache.org/docs/format/ADBC.html [Authentication]: inv:crate-reference:*:label#admin_auth [CrateDB BLOBs]: inv:crate-reference:*:label#blob_support [CrateDB bulk operations]: inv:crate-reference:*:label#http-bulk-ops [CrateDB HTTP interface]: inv:crate-reference:*:label#interface-http [CrateDB PostgreSQL interface]: inv:crate-reference:*:label#interface-postgresql [HTTP protocol]: https://en.wikipedia.org/wiki/HTTP [JDBC]: https://en.wikipedia.org/wiki/Java_Database_Connectivity [ODBC]: https://en.wikipedia.org/wiki/Open_Database_Connectivity [PostgreSQL wire protocol]: https://www.postgresql.org/docs/current/protocol.html [python-dbapi-by-example]: inv:crate-python:*:label#by-example [python-sqlalchemy-by-example]: inv:sqlalchemy-cratedb:*:label#by-example [schema]: inv:crate-reference:*:label#ddl-create-table-schemas [schemas]: inv:crate-reference:*:label#ddl-create-table-schemas [superuser]: inv:crate-reference:*:label#administration_user_management
.. _index: ##################### CrateDB Python Client ##################### .. rubric:: Table of contents .. contents:: :local: :depth: 1 ************ Introduction ************ The Python client library for `CrateDB`_ implements the Python Database API Specification v2.0 (`PEP 249`_). The Python driver can be used to connect to both `CrateDB`_ and `CrateDB Cloud`_, and is verified to work on Linux, macOS, and Windows. It is used by the `Crash CLI`_, as well as other libraries and applications connecting to CrateDB from the Python ecosystem. It is verified to work with CPython, but it has also been tested successfully with `PyPy`_. Please make sure to also visit the section about :ref:`other-options`, using the :ref:`crate-reference:interface-postgresql` interface of `CrateDB`_. ************* Documentation ************* For general help about the Python Database API, please consult `PEP 249`_. For more detailed information about how to install the client driver, how to connect to a CrateDB cluster, and how to run queries, consult the resources referenced below. .. toctree:: :titlesonly: getting-started connect query blobs DB API ====== Install package from PyPI. .. code-block:: shell pip install crate Connect to CrateDB instance running on ``localhost``. .. code-block:: python # Connect using DB API. from crate import client from pprint import pp query = "SELECT country, mountain, coordinates, height FROM sys.summits ORDER BY country;" with client.connect("localhost:4200", username="crate") as connection: cursor = connection.cursor() cursor.execute(query) pp(cursor.fetchall()) cursor.close() Connect to `CrateDB Cloud`_. .. code-block:: python # Connect using DB API. from crate import client connection = client.connect( servers="https://example.aks1.westeurope.azure.cratedb.net:4200", username="admin", password="") Data types ========== The DB API driver supports :ref:`CrateDB's data types ` to different degrees. For more information, please consult the :ref:`data-types` documentation page. .. toctree:: :maxdepth: 2 data-types Migration Notes =============== The :ref:`CrateDB dialect ` for `SQLAlchemy`_ is provided by the `sqlalchemy-cratedb`_ package. If you are migrating from previous versions of ``crate[sqlalchemy]<1.0.0``, you will find that the newer releases ``crate>=1.0.0`` no longer include the SQLAlchemy dialect for CrateDB. See `migrate to sqlalchemy-cratedb`_ for relevant guidelines about how to successfully migrate to the `sqlalchemy-cratedb`_ package. Examples ======== - The :ref:`by-example` section enumerates concise examples demonstrating the different API interfaces of the CrateDB Python client library. Those are DB API, HTTP, and BLOB interfaces. - Executable code examples are maintained within the `cratedb-examples repository`_. `sqlalchemy-cratedb`_, `python-dataframe-examples`_, and `python-sqlalchemy-examples`_ provide relevant code snippets about how to connect to CrateDB using `SQLAlchemy`_, `pandas`_, or `Dask`_, and how to load and export data. - The `sample application`_ and the corresponding `sample application documentation`_ demonstrate the use of the driver on behalf of an example "guestbook" application, using Flask. .. toctree:: :maxdepth: 2 by-example/index ******************* Project information ******************* Resources ========= - `Source code `_ - `Documentation `_ - `Python Package Index (PyPI) `_ Contributions ============= The CrateDB Python client library is an open source project, and is `managed on GitHub`_. 
Every kind of contribution, feedback, or patch, is much welcome. `Create an issue`_ or submit a patch if you think we should include a new feature, or to report or fix a bug. Development =========== In order to setup a development environment on your workstation, please head over to the `development sandbox`_ documentation. When you see the software tests succeed, you should be ready to start hacking. Page index ========== The full index for all documentation pages can be inspected at :ref:`index-all`. License ======= The project is licensed under the terms of the Apache 2.0 license, like `CrateDB itself `_, see `LICENSE`_. .. _Apache Superset: https://github.com/apache/superset .. _Crash CLI: https://crate.io/docs/crate/crash/ .. _CrateDB: https://crate.io/products/cratedb .. _CrateDB Cloud: https://console.cratedb.cloud/ .. _CrateDB source: https://github.com/crate/crate .. _Create an issue: https://github.com/crate/crate-python/issues .. _Dask: https://en.wikipedia.org/wiki/Dask_(software) .. _development sandbox: https://github.com/crate/crate-python/blob/main/DEVELOP.rst .. _cratedb-examples repository: https://github.com/crate/cratedb-examples .. _FIWARE QuantumLeap data historian: https://github.com/orchestracities/ngsi-timeseries-api .. _GeoJSON: https://geojson.org/ .. _GeoJSON geometry objects: https://tools.ietf.org/html/rfc7946#section-3.1 .. _LICENSE: https://github.com/crate/crate-python/blob/main/LICENSE .. _managed on GitHub: https://github.com/crate/crate-python .. _migrate to sqlalchemy-cratedb: https://cratedb.com/docs/sqlalchemy-cratedb/migrate-from-crate-client.html .. _pandas: https://en.wikipedia.org/wiki/Pandas_(software) .. _PEP 249: https://peps.python.org/pep-0249/ .. _PyPy: https://www.pypy.org/ .. _python-dataframe-examples: https://github.com/crate/cratedb-examples/tree/main/by-dataframe .. _python-sqlalchemy-examples: https://github.com/crate/cratedb-examples/tree/main/by-language/python-sqlalchemy .. _sample application: https://github.com/crate/crate-sample-apps/tree/main/python-flask .. _sample application documentation: https://github.com/crate/crate-sample-apps/blob/main/python-flask/documentation.md .. _SQLAlchemy: https://en.wikipedia.org/wiki/Sqlalchemy .. _sqlalchemy-cratedb: https://github.com/crate/sqlalchemy-cratedb .. _Use CrateDB with pandas: https://github.com/crate/crate-qa/pull/246.. _index: ############################## SQLAlchemy dialect for CrateDB ############################## .. rubric:: Table of contents .. contents:: :local: :depth: 1 ***** About ***** The :ref:`CrateDB dialect ` for `SQLAlchemy`_ provides adapters for `CrateDB`_ and SQLAlchemy. The supported versions are 1.3, 1.4, and 2.0. The package is available from `PyPI`_ at `sqlalchemy-cratedb`_. The connector can be used to connect to both `CrateDB`_ and `CrateDB Cloud`_, and is verified to work on Linux, macOS, and Windows. It is used by pandas, Dask, and many other libraries and applications connecting to CrateDB from the Python ecosystem. It is verified to work with CPython, but it has also been tested successfully with `PyPy`_. .. note:: If you are upgrading from ``crate[sqlalchemy]`` to ``sqlalchemy-cratedb``, please read this section carefully. .. toctree:: :titlesonly: migrate-from-crate-client ************ Introduction ************ Please consult the `SQLAlchemy tutorial`_, and the general `SQLAlchemy documentation`_. 
For more detailed information about how to install the dialect package, how to connect to a CrateDB cluster, and how to run queries, consult the resources referenced below. ************ Installation ************ Install package from PyPI. .. code-block:: shell pip install --upgrade sqlalchemy-cratedb More installation details can be found over here. .. toctree:: :titlesonly: install .. _features: ******** Features ******** The CrateDB dialect for `SQLAlchemy`_ offers convenient ORM access and supports CrateDB's container data types ``OBJECT`` and ``ARRAY``, its vector data type ``FLOAT_VECTOR``, and geospatial data types using `GeoJSON`_, supporting different kinds of `GeoJSON geometry objects`_. .. toctree:: :maxdepth: 2 overview .. _synopsis: Synopsis ======== Connect to CrateDB instance running on ``localhost``. .. code-block:: python # Connect using SQLAlchemy Core. import sqlalchemy as sa from pprint import pp dburi = "crate://localhost:4200" query = "SELECT country, mountain, coordinates, height FROM sys.summits ORDER BY country;" engine = sa.create_engine(dburi, echo=True) with engine.connect() as connection: with connection.execute(sa.text(query)) as result: pp(result.mappings().fetchall()) Connect to `CrateDB Cloud`_. .. code-block:: python # Connect using SQLAlchemy Core. import sqlalchemy as sa dburi = "crate://admin:@example.aks1.westeurope.azure.cratedb.net:4200?ssl=true" engine = sa.create_engine(dburi, echo=True) Load results into `pandas`_ DataFrame. .. code-block:: shell pip install pandas .. code-block:: python # Connect using SQLAlchemy Core and pandas. import pandas as pd import sqlalchemy as sa dburi = "crate://localhost:4200" query = "SELECT * FROM sys.summits ORDER BY country;" engine = sa.create_engine(dburi, echo=True) with engine.connect() as connection: df = pd.read_sql(sql=sa.text(query), con=connection) df.info() print(df) Data Types ========== The :ref:`DB API driver ` and the SQLAlchemy dialect support :ref:`CrateDB's data types ` to different degrees. For more information, please consult the :ref:`data-types` and :ref:`SQLAlchemy extension types ` documentation pages. .. toctree:: :maxdepth: 2 :hidden: data-types Support Utilities ================= The package bundles a few support and utility functions that try to fill a few gaps you will observe when working with CrateDB, when compared with other databases. Due to its distributed nature, CrateDB's behavior and features differ from those found in other RDBMS systems. .. toctree:: :maxdepth: 2 support .. _examples: .. _by-example: .. _sqlalchemy-by-example: ******** Examples ******** This section enumerates concise examples demonstrating the use of the SQLAlchemy dialect. .. toctree:: :maxdepth: 1 getting-started crud working-with-types advanced-querying inspection-reflection dataframe .. rubric:: See also - Executable code examples are maintained within the `cratedb-examples repository`_. - `Using CrateDB with pandas, Dask, and Polars`_ has corresponding code snippets about how to connect to CrateDB using popular data frame libraries, and how to load and export data. - The `Apache Superset`_ and `FIWARE QuantumLeap data historian`_ projects. ******************* Project information ******************* Resources ========= - `Source code `_ - `Documentation `_ - `Python Package Index (PyPI) `_ Contributions ============= The SQLAlchemy dialect for CrateDB is an open source project, and is `managed on GitHub`_. Every kind of contribution, feedback, or patch, is much welcome. 
`Create an issue`_ or submit a patch if you think we should include a new feature, or to report or fix a bug. Development =========== In order to setup a development environment on your workstation, please head over to the `development sandbox`_ documentation. When you see the software tests succeed, you should be ready to start hacking. Page index ========== The full index for all documentation pages can be inspected at :ref:`index-all`. License ======= The project is licensed under the terms of the Apache 2.0 license, like `CrateDB itself `_, see `LICENSE`_. .. _Apache Superset: https://github.com/apache/superset .. _CrateDB: https://cratedb.com/database .. _CrateDB Cloud: https://console.cratedb.cloud/ .. _CrateDB source: https://github.com/crate/crate .. _Create an issue: https://github.com/crate/sqlalchemy-cratedb/issues .. _development sandbox: https://github.com/crate/sqlalchemy-cratedb/blob/main/DEVELOP.md .. _cratedb-examples repository: https://github.com/crate/cratedb-examples/tree/main/by-language .. _FIWARE QuantumLeap data historian: https://github.com/orchestracities/ngsi-timeseries-api .. _GeoJSON: https://geojson.org/ .. _GeoJSON geometry objects: https://tools.ietf.org/html/rfc7946#section-3.1 .. _LICENSE: https://github.com/crate/sqlalchemy-cratedb/blob/main/LICENSE .. _managed on GitHub: https://github.com/crate/sqlalchemy-cratedb .. _pandas: https://pandas.pydata.org/ .. _PEP 249: https://peps.python.org/pep-0249/ .. _PyPI: https://pypi.org/ .. _PyPy: https://www.pypy.org/ .. _SQLAlchemy: https://www.sqlalchemy.org/ .. _SQLAlchemy documentation: https://docs.sqlalchemy.org/ .. _SQLAlchemy tutorial: https://docs.sqlalchemy.org/en/latest/tutorial/ .. _sqlalchemy-cratedb: https://pypi.org/project/sqlalchemy-cratedb/ .. _Using CrateDB with pandas, Dask, and Polars: https://github.com/crate/cratedb-examples/tree/main/by-dataframe# micropython-cratedb - A CrateDB Driver for MicroPython [![Tests](https://github.com/crate/micropython-cratedb/actions/workflows/tests.yml/badge.svg)](https://github.com/crate/micropython-cratedb/actions/workflows/tests.yml) [![Test coverage](https://img.shields.io/codecov/c/gh/crate/micropython-cratedb.svg?style=flat-square)](https://codecov.io/gh/crate/micropython-cratedb/) ## Introduction micropython-cratedb is a [CrateDB](https://cratedb.com) driver for the [MicroPython](https://micropython.org) language. It connects to CrateDB using the [HTTP Endpoint](https://cratedb.com/docs/crate/reference/en/latest/interfaces/http.html). To use this, you'll need a CrateDB database cluster. Sign up for our cloud free tier [here](https://console.cratedb.cloud/) or get started with Docker [here](https://hub.docker.com/_/crate). Want to learn more about CrateDB? Take our free [Fundamentals course](https://learn.cratedb.com/course-overview) at the CrateDB Academy. You can also [watch this video](https://www.youtube.com/watch?v=c4CArvphNeM) from the [Raspberry Pint Meetup](https://www.meetup.com/raspberry-pint-london/) where [Simon Prickett](https://simonprickett.dev), CrateDB's Developer Advocate, demonstrates how to use this driver with various sensors attached to Raspberry Pi Pico W devices. ## Installation There are two ways to install this driver. 
### Install with `mpremote` Install the driver with [`mpremote`](https://docs.micropython.org/en/latest/reference/mpremote.html) like this: ```bash mpremote mip install github:crate/micropython-cratedb ``` This will install the driver into `/lib` on the device, along with the [base64](https://github.com/micropython/micropython-lib/tree/master/python-stdlib/base64) module from `micropython-lib`. ### Install with `mip` You can also install the driver into `/lib` on the device by running the following commands at the MicroPython REPL on the device: ```python import network import mip wlan = network.WLAN(network.STA_IF) wlan.active(True) wlan.connect("", "") wlan.isconnected() # Run this until it returns True mip.install("github:crate/micropython-cratedb") ``` ## Using the Driver in a MicroPython Script Import the driver like this: ```python import cratedb ``` ### Connecting to CrateDB Connect to a CrateDB Cloud cluster using SSL, by providing hostname, username, and password: ```python crate = cratedb.CrateDB( host="host", user="user", password="password" ) ``` The driver uses SSL by default. If you're running CrateDB on your workstation (with Docker for example, by using `docker run --rm -it --publish=4200:4200 crate`), connect like this: ```python crate = cratedb.CrateDB( host="hostname", use_ssl=False ) ``` The driver will connect to port 4200 unless you provide an alternative value: ```python crate = cratedb.CrateDB( host="host", user="user", port=4201, password="password" ) ``` ### Interacting with CrateDB CrateDB is a SQL database: you'll store, update and retrieve data using SQL statements. The examples that follow assume a table schema that looks like this: ```sql CREATE TABLE temp_humidity ( sensor_id TEXT, ts TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS current_timestamp, temp DOUBLE PRECISION, humidity DOUBLE PRECISION ); ``` Assume that the table contains a few sample rows. #### Retrieving Data The `execute` method sends a SQL statement to the database for execution, and returns the result: ```python response = crate.execute( "SELECT sensor_id, ts, temp, humidity FROM temp_humidity ORDER BY ts DESC" ) ``` You can also use parameterized queries: ```python response = crate.execute( """ SELECT sensor_id, ts, temp, humidity FROM temp_humidity WHERE sensor_id = ? ORDER BY ts DESC """, [ "a01" ] ) ``` Data is returned as a dictionary that looks like this: ```python { 'rows': [ ['a01', 1728473302619, 22.8, 59.1], ['a02', 1728473260880, 3.3, 12.9], ['a02', 1728473251188, 3.2, 12.7], ['a03', 1728473237365, 28.4, 65.7], ['a01', 1728473223332, 22.3, 58.6] ], 'rowcount': 5, 'cols': [ 'sensor_id', 'ts', 'temp', 'humidity' ], 'duration': 18.11329 } ``` Use the `with_types` parameter to have CrateDB return information about the data type of each column in the resultset. This feature is off by default to minimize network bandwidth. ```python response = crate.execute( "SELECT sensor_id, ts, temp FROM temp_humidity WHERE sensor_id = ? ORDER BY ts DESC", [ "a01" ], with_types=True ) ``` The resultset then contains an extra key, `col_types`: ```python { 'col_types': [ 4, 11, 6 ], 'cols': [ 'sensor_id', 'ts', 'temp' ], 'rowcount': 2, 'rows': [ ['a01', 1728473302619, 22.8], ['a01', 1728473223332, 22.3] ], 'duration': 7.936583 } ``` Constants are provided for each type. For example type `11` is `CRATEDB_TYPE_TIMESTAMP_WITH_TIME_ZONE`.
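As a sketch of how the type constants can be used (this assumes the `CRATEDB_TYPE_*` constants are exposed as module-level attributes of `cratedb`; verify the exact names and location against the driver source), you could label each column in a typed result:

```python
# Illustrative sketch, not part of the official examples.
# Assumption: CRATEDB_TYPE_* constants live at module level in `cratedb`.
import cratedb

crate = cratedb.CrateDB(host="host", user="user", password="password")
response = crate.execute(
    "SELECT sensor_id, ts, temp FROM temp_humidity LIMIT 1",
    with_types=True
)

# Pair each column name with its reported type code.
for name, type_code in zip(response["cols"], response["col_types"]):
    if type_code == cratedb.CRATEDB_TYPE_TIMESTAMP_WITH_TIME_ZONE:
        print(name, "-> timestamp with time zone")
    else:
        print(name, "-> type code", type_code)
```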
#### Inserting / Updating Data Here's an example insert statement: ```python response = crate.execute( "INSERT INTO temp_humidity (sensor_id, temp, humidity) VALUES (?, ?, ?)", [ "a01", 22.8, 60.1 ] ) ``` The response from CrateDB looks like this: ```python { 'rows': [ [] ], 'rowcount': 1, 'cols': [], 'duration': 38.615707 } ``` If you don't need a response, set the `return_response` parameter to `False` (default is `True`). This will save a small amount of time that the driver normally spends on processing the response. ```python response = crate.execute( "INSERT INTO temp_humidity (sensor_id, temp, humidity) VALUES (?, ?, ?)", [ "a01", 22.9, 60.3 ], return_response=False ) ``` `response` will be `None`. You can add multiple records in a single network round trip using a bulk insert: ```python response = crate.execute( "INSERT INTO temp_humidity (sensor_id, temp, humidity) VALUES (?, ?, ?)", [ [ "a01", 22.7, 60.1 ], [ "a02", 3.3, 12.9 ] ] ) ``` The response looks like this, note that you can expect to receive multiple results each containing their own `rowcount`: ```python { 'results': [ { 'rowcount': 1 }, { 'rowcount': 1 } ], 'cols': [], 'duration': 32.546875 } ``` Existing rows can also be updated: ```python response = crate.execute( "UPDATE temp_humidity SET sensor_id = ? WHERE sensor_id = ?", [ "a04", "a01" ] ) ``` The response includes the number of rows affected by the update: ```python { 'rows': [ [] ], 'rowcount': 5, 'cols': [], 'duration': 696.36975 } ``` #### Working with Objects and Arrays CrateDB supports flexible storage and indexing of objects / JSON data. To learn more about this, check out our [blog post](https://cratedb.com/blog/handling-dynamic-objects-in-cratedb) that explains the different ways objects can be stored. Here are some basic examples showing how to store objects with micropython-cratedb and retrieve desired fields from them. Assume a table with the following definition having a [dynamic object](https://cratedb.com/blog/handling-dynamic-objects-in-cratedb) column: ```sql CREATE TABLE driver_object_test ( id TEXT PRIMARY KEY, data OBJECT(DYNAMIC) ) ``` Objects of arbitrary structure are inserted like this: ```python response = crate.execute( "INSERT INTO driver_object_test (id, data) VALUES (?, ?)", [ "2cae54", { "sensor_readings": { "temp": 23.3, "humidity": 61.2 }, "metadata": { "software_version": "1.19", "battery_percentage": 57, "uptime": 2851200 } } ] ) ``` And values contained in objects can be retrieved selectively like this: ```python response = crate.execute( """SELECT id, data['metadata']['uptime'] AS uptime, data['sensor_readings'] AS sensor_readings FROM driver_object_test WHERE id = ?""", [ "2cae54" ] ) ``` `response` contains the matching records like this: ```python { 'rows': [ [2851200, {'humidity': 61.2, 'temp': 23.3}] ], 'rowcount': 1, 'cols': [ 'uptime', 'sensor_readings' ], 'duration': 4.047666 } ``` For more examples, see the [`object_examples.py`](examples/object_examples.py) script in the `examples` folder. #### Deleting Data Delete queries work like any other SQL statement: ```python response = crate.execute( "DELETE FROM temp_humidity WHERE sensor_id = ?", [ "a02" ] ) ``` And the response from the above looks like this, again including the number of rows affected: ```python { 'rows': [ [] ], 'rowcount': 3, 'cols': [], 'duration': 66.81604 } ``` #### Errors / Exceptions The driver can throw the following types of exception: * `NetworkError`: when there is a network level issue, for example the hostname cannot be resolved. 
* `CrateDBError`: errors returned by the CrateDB cluster, for example when invalid SQL is submitted. Here's an example showing how to catch a network error: ```python crate = cratedb.CrateDB("nonexist", use_ssl=False) try: response = crate.execute( "SELECT sensor_id, ts, temp FROM temp_humidity WHERE sensor_id = ? ORDER BY ts DESC", [ "a01" ], with_types=True ) except cratedb.NetworkError as e: print("Network error:") print(e) ``` Output: ```python Network error: [addrinfo error 8] ``` This example shows a `CrateDBError`: ```python try: response = crate.execute( "SELECT nonexist FROM temp_humidity" ) except cratedb.CrateDBError as e: print("CrateDB error:") print(e) ``` Output: ```python CrateDB error: { 'error': { 'message': 'ColumnUnknownException[Column nonexist unknown]', 'code': 4043 } } ``` Constants for each value of `code` are provided. For example `4043` is `CRATEDB_ERROR_UNKNOWN_COLUMN `. ## Examples The [`examples`](examples/) folder contains example MicroPython scripts, some of which are for specific microcontroller boards, including the popular Raspberry Pi Pico W. Hardware-independent example programs also work well on CPython, and the MicroPython UNIX and Windows port, see [Running on CPython](./docs/cpython.md) and [Running on MicroPython](./docs/micropython.md). ## Testing This driver library has been tested using the following MicroPython versions: * **1.24.0** * macOS/darwin ([install with Homebrew package manager](https://formulae.brew.sh/formula/micropython)) * Raspberry Pi Pico W ([download](https://micropython.org/download/RPI_PICO_W/)) * **1.23.0** * macOS/darwin ([install with Homebrew package manager](https://formulae.brew.sh/formula/micropython)) * Raspberry Pi Pico W ([download](https://micropython.org/download/RPI_PICO_W/)) * **1.23.0 (Pimoroni build)** * Raspberry Pi Pico W ([download](https://github.com/pimoroni/pimoroni-pico/releases)) If you have other microcontroller boards that you can test the driver with or provide examples for, we'd love to receive a [pull request](/pulls)! ## Need Help? If you need help, have a bug report or feature request, or just want to show us your project that uses this driver then we'd love to hear from you! For bugs or feature requests, please raise an [issue](/issues) on GitHub. We also welcome [pull requests](/pulls)! If you have a project to share with us, or a more general question about this driver or CrateDB, please post in our [community forum](https://community.cratedb.com/)... currentmodule:: psycopg .. _module-usage: Basic module usage ================== The basic Psycopg usage is common to all the database adapters implementing the `DB-API`__ protocol. Other database adapters, such as the builtin `sqlite3` or `psycopg2`, have roughly the same pattern of interaction. .. __: https://www.python.org/dev/peps/pep-0249/ .. index:: pair: Example; Usage .. _usage: Main objects in Psycopg 3 ------------------------- Here is an interactive session showing some of the basic commands: .. code:: python # Note: the module name is psycopg, not psycopg3 import psycopg # Connect to an existing database with psycopg.connect("dbname=test user=postgres") as conn: # Open a cursor to perform database operations with conn.cursor() as cur: # Execute a command: this creates a new table cur.execute(""" CREATE TABLE test ( id serial PRIMARY KEY, num integer, data text) """) # Pass data to fill a query placeholders and let Psycopg perform # the correct conversion (no SQL injections!) 
cur.execute( "INSERT INTO test (num, data) VALUES (%s, %s)", (100, "abc'def")) # Query the database and obtain data as Python objects. cur.execute("SELECT * FROM test") print(cur.fetchone()) # will print (1, 100, "abc'def") # You can use `cur.executemany()` to perform an operation in batch cur.executemany( "INSERT INTO test (num) values (%s)", [(33,), (66,), (99,)]) # You can use `cur.fetchmany()`, `cur.fetchall()` to return a list # of several records, or even iterate on the cursor cur.execute("SELECT id, num FROM test order by num") for record in cur: print(record) # Make the changes to the database persistent conn.commit() In the example you can see some of the main objects and methods and how they relate to each other: - The function `~Connection.connect()` creates a new database session and returns a new `Connection` instance. `AsyncConnection.connect()` creates an `asyncio` connection instead. - The `~Connection` class encapsulates a database session. It allows to: - create new `~Cursor` instances using the `~Connection.cursor()` method to execute database commands and queries, - terminate transactions using the methods `~Connection.commit()` or `~Connection.rollback()`. - The class `~Cursor` allows interaction with the database: - send commands to the database using methods such as `~Cursor.execute()` and `~Cursor.executemany()`, - retrieve data from the database, iterating on the cursor or using methods such as `~Cursor.fetchone()`, `~Cursor.fetchmany()`, `~Cursor.fetchall()`. - Using these objects as context managers (i.e. using `!with`) will make sure to close them and free their resources at the end of the block (notice that :ref:`this is different from psycopg2 `). .. seealso:: A few important topics you will have to deal with are: - :ref:`query-parameters`. - :ref:`types-adaptation`. - :ref:`transactions`. Shortcuts --------- The pattern above is familiar to `!psycopg2` users. However, Psycopg 3 also exposes a few simple extensions which make the above pattern leaner: - the `Connection` objects exposes an `~Connection.execute()` method, equivalent to creating a cursor, calling its `~Cursor.execute()` method, and returning it. .. code:: # In Psycopg 2 cur = conn.cursor() cur.execute(...) # In Psycopg 3 cur = conn.execute(...) - The `Cursor.execute()` method returns `!self`. This means that you can chain a fetch operation, such as `~Cursor.fetchone()`, to the `!execute()` call: .. code:: # In Psycopg 2 cur.execute(...) record = cur.fetchone() cur.execute(...) for record in cur: ... # In Psycopg 3 record = cur.execute(...).fetchone() for record in cur.execute(...): ... Using them together, in simple cases, you can go from creating a connection to using a result in a single expression: .. code:: print(psycopg.connect(DSN).execute("SELECT now()").fetchone()[0]) # 2042-07-12 18:15:10.706497+01:00 .. index:: pair: Connection; `!with` .. _with-connection: Connection context ------------------ Psycopg 3 `Connection` can be used as a context manager: .. code:: python with psycopg.connect() as conn: ... # use the connection # the connection is now closed When the block is exited, if there is a transaction open, it will be committed. If an exception is raised within the block the transaction is rolled back. In both cases the connection is closed. It is roughly the equivalent of: .. code:: python conn = psycopg.connect() try: ... # use the connection except BaseException: conn.rollback() else: conn.commit() finally: conn.close() .. 
note:: This behaviour is not what `!psycopg2` does: in `!psycopg2` :ref:`there is no final close() ` and the connection can be used in several `!with` statements to manage different transactions. This behaviour has been considered non-standard and surprising so it has been replaced by the more explicit `~Connection.transaction()` block. Note that, while the above pattern is what most people would use, `connect()` doesn't enter a block itself, but returns an "un-entered" connection, so that it is still possible to use a connection regardless of the code scope and the developer is free to use (and responsible for calling) `~Connection.commit()`, `~Connection.rollback()`, `~Connection.close()` as and where needed. .. warning:: If a connection is just left to go out of scope, the way it will behave with or without the use of a `!with` block is different: - if the connection is used without a `!with` block, the server will find a connection closed INTRANS and roll back the current transaction; - if the connection is used with a `!with` block, there will be an explicit COMMIT and the operations will be finalised. You should use a `!with` block when your intention is just to execute a set of operations and then committing the result, which is the most usual thing to do with a connection. If your connection life cycle and transaction pattern is different, and want more control on it, the use without `!with` might be more convenient. See :ref:`transactions` for more information. `AsyncConnection` can be also used as context manager, using ``async with``, but be careful about its quirkiness: see :ref:`async-with` for details. Adapting psycopg to your program -------------------------------- The above :ref:`pattern of use ` only shows the default behaviour of the adapter. Psycopg can be customised in several ways, to allow the smoothest integration between your Python program and your PostgreSQL database: - If your program is concurrent and based on `asyncio` instead of on threads/processes, you can use :ref:`async connections and cursors `. - If you want to customise the objects that the cursor returns, instead of receiving tuples, you can specify your :ref:`row factories `. - If you want to customise how Python values and PostgreSQL types are mapped into each other, beside the :ref:`basic type mapping `, you can :ref:`configure your types `. .. _logging: Connection logging ------------------ Psycopg uses the stdlib `logging` module to report the operations happening at connection time. If you experience slowness or random failures on connection you can set the ``psycopg`` logger at ``DEBUG`` level to read the operations performed. A very simple example of logging configuration may be the following: .. code:: python import logging import psycopg logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s") logging.getLogger("psycopg").setLevel(logging.DEBUG) psycopg.connect("host=192.0.2.1,localhost connect_timeout=10") In this example Psycopg will first try to connect to a non responsive server, only stopping after hitting the timeout, and will move on to a working server. The resulting log might look like: .. 
code:: text 2045-05-10 11:45:54,364 DEBUG connection attempt: host: '192.0.2.1', port: None, hostaddr: '192.0.2.1' 2045-05-10 11:45:54,365 DEBUG connection started: 2045-05-10 11:45:54,365 DEBUG connection polled: 2045-05-10 11:46:04,392 DEBUG connection failed: host: '192.0.2.1', port: None, hostaddr: '192.0.2.1': connection timeout expired 2045-05-10 11:46:04,392 DEBUG connection attempt: host: 'localhost', port: None, hostaddr: '127.0.0.1' 2045-05-10 11:46:04,393 DEBUG connection started: 2045-05-10 11:46:04,394 DEBUG connection polled: 2045-05-10 11:46:04,394 DEBUG connection polled: 2045-05-10 11:46:04,411 DEBUG connection polled: 2045-05-10 11:46:04,413 DEBUG connection polled: 2045-05-10 11:46:04,423 DEBUG connection polled: 2045-05-10 11:46:04,424 DEBUG connection polled: 2045-05-10 11:46:04,426 DEBUG connection polled: 2045-05-10 11:46:04,426 DEBUG connection succeeded: host: 'localhost', port: None, hostaddr: '127.0.0.1' Please note that a connection attempt might try to reach different servers: either explicitly because the connection string specifies `multiple hosts`__, or implicitly, because the DNS resolves the host name to multiple IPs. .. __: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-MULTIPLE-HOSTS--- title: Welcome slug: / --- import { Logo } from '/components/logo.tsx' node-postgres is a collection of node.js modules for interfacing with your PostgreSQL database. It has support for callbacks, promises, async/await, connection pooling, prepared statements, cursors, streaming results, C/C++ bindings, rich type parsing, and more! Just like PostgreSQL itself there are a lot of features: this documentation aims to get you up and running quickly and in the right direction. It also tries to provide guides for more advanced & edge-case topics allowing you to tap into the full power of PostgreSQL from node.js. ## Install ```bash $ npm install pg ``` ## Supporters node-postgres continued development and support is made possible by the many [supporters](https://github.com/brianc/node-postgres/blob/master/SPONSORS.md). Special thanks to [Medplum](https://www.medplum.com/) for sponsoring node-postgres for a whole year! Medplum If you or your company would like to sponsor node-postgres stop by [GitHub Sponsors](https://github.com/sponsors/brianc) and sign up or feel free to [email me](mailto:brian@pecanware.com) if you want to add your logo to the documentation or discuss higher tiers of sponsorship! # Version compatibility node-postgres strives to be compatible with all recent LTS versions of node & the most recent "stable" version. At the time of this writing node-postgres is compatible with node 18.x, 20.x, 22.x, and 24.x. ## Getting started The simplest possible way to connect, query, and disconnect is with async/await: ```js import { Client } from 'pg' const client = new Client() await client.connect() const res = await client.query('SELECT $1::text as message', ['Hello world!']) console.log(res.rows[0].message) // Hello world! await client.end() ``` ### Error Handling For the sake of simplicity, these docs will assume that the methods are successful. In real life use, make sure to properly handle errors thrown in the methods. A `try/catch` block is a great way to do so: ```ts import { Client } from 'pg' const client = new Client() await client.connect() try { const res = await client.query('SELECT $1::text as message', ['Hello world!']) console.log(res.rows[0].message) // Hello world! 
} catch (err) { console.error(err); } finally { await client.end() } ``` ### Pooling In most applications you'll want to use a [connection pool](/features/pooling) to manage your connections. This is a more advanced topic, but here's a simple example of how to use it: ```js import { Pool } from 'pg' const pool = new Pool() const res = await pool.query('SELECT $1::text as message', ['Hello world!']) console.log(res.rows[0].message) // Hello world! ``` Our real-world apps are almost always more complicated than that, and I urge you to read on! --- title: "Documentation" date: 2022-06-19T22:46:55+05:30 draft: false aliases: - "/documentation/head/index.html" --- Java Database Connectivity (JDBC) is an application programming interface (API) for the programming language Java, which defines how a client may access a database. It is part of the Java Standard Edition platform and provides methods to query and update data in a database, and is oriented towards relational databases. PostgreSQL® JDBC Driver (pgJDBC for short) allows Java programs to connect to a PostgreSQL® database using standard, database independent Java code. It is an open source JDBC driver written in Pure Java (Type 4), and communicates in the PostgreSQL® native network protocol. Because of this, the driver is platform independent; once compiled, the driver can be used on any system. The current version of the driver should be compatible with PostgreSQL® 8.2 and higher using version 3.0 of the PostgreSQL® protocol, and it's compatible with Java 8 (JDBC 4.2) and above. This manual is not intended as a complete guide to JDBC programming, but should help to get you started. For more information refer to the standard JDBC API documentation. Also, take a look at the examples included with the source.[![Go Reference](https://pkg.go.dev/badge/github.com/jackc/pgx/v5.svg)](https://pkg.go.dev/github.com/jackc/pgx/v5) [![Build Status](https://github.com/jackc/pgx/actions/workflows/ci.yml/badge.svg)](https://github.com/jackc/pgx/actions/workflows/ci.yml) # pgx - PostgreSQL Driver and Toolkit pgx is a pure Go driver and toolkit for PostgreSQL. The pgx driver is a low-level, high performance interface that exposes PostgreSQL-specific features such as `LISTEN` / `NOTIFY` and `COPY`. It also includes an adapter for the standard `database/sql` interface. The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers, proxies, load balancers, logical replication clients, etc. ## Example Usage ```go package main import ( "context" "fmt" "os" "github.com/jackc/pgx/v5" ) func main() { // urlExample := "postgres://username:password@localhost:5432/database_name" conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL")) if err != nil { fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err) os.Exit(1) } defer conn.Close(context.Background()) var name string var weight int64 err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight) if err != nil { fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err) os.Exit(1) } fmt.Println(name, weight) } ``` See the [getting started guide](https://github.com/jackc/pgx/wiki/Getting-started-with-pgx) for more information.
## Features * Support for approximately 70 different PostgreSQL types * Automatic statement preparation and caching * Batch queries * Single-round trip query mode * Full TLS connection control * Binary format support for custom types (allows for much quicker encoding/decoding) * `COPY` protocol support for faster bulk data loads * Tracing and logging support * Connection pool with after-connect hook for arbitrary connection setup * `LISTEN` / `NOTIFY` * Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings * `hstore` support * `json` and `jsonb` support * Maps `inet` and `cidr` PostgreSQL types to `netip.Addr` and `netip.Prefix` * Large object support * NULL mapping to pointer to pointer * Supports `database/sql.Scanner` and `database/sql/driver.Valuer` interfaces for custom types * Notice response handling * Simulated nested transactions with savepoints ## Choosing Between the pgx and database/sql Interfaces The pgx interface is faster. Many PostgreSQL specific features such as `LISTEN` / `NOTIFY` and `COPY` are not available through the `database/sql` interface. The pgx interface is recommended when: 1. The application only targets PostgreSQL. 2. No other libraries that require `database/sql` are in use. It is also possible to use the `database/sql` interface and convert a connection to the lower-level pgx interface as needed. ## Testing See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions. ## Architecture See the presentation at Golang Estonia, [PGX Top to Bottom](https://www.youtube.com/watch?v=sXMSWhcHCf8) for a description of pgx architecture. ## Supported Go and PostgreSQL Versions pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For [Go](https://golang.org/doc/devel/release.html#policy) that is the two most recent major releases and for [PostgreSQL](https://www.postgresql.org/support/versioning/) the major releases in the last 5 years. This means pgx supports Go 1.23 and higher and PostgreSQL 13 and higher. pgx also is tested against the latest version of [CockroachDB](https://www.cockroachlabs.com/product/). ## Version Policy pgx follows semantic versioning for the documented public API on stable releases. `v5` is the latest stable major version. ## PGX Family Libraries ### [github.com/jackc/pglogrepl](https://github.com/jackc/pglogrepl) pglogrepl provides functionality to act as a client for PostgreSQL logical replication. ### [github.com/jackc/pgmock](https://github.com/jackc/pgmock) pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler). ### [github.com/jackc/tern](https://github.com/jackc/tern) tern is a stand-alone SQL migration system. ### [github.com/jackc/pgerrcode](https://github.com/jackc/pgerrcode) pgerrcode contains constants for the PostgreSQL error codes. 
## Adapters for 3rd Party Types * [github.com/jackc/pgx-gofrs-uuid](https://github.com/jackc/pgx-gofrs-uuid) * [github.com/jackc/pgx-shopspring-decimal](https://github.com/jackc/pgx-shopspring-decimal) * [github.com/twpayne/pgx-geos](https://github.com/twpayne/pgx-geos) ([PostGIS](https://postgis.net/) and [GEOS](https://libgeos.org/) via [go-geos](https://github.com/twpayne/go-geos)) * [github.com/vgarvardt/pgx-google-uuid](https://github.com/vgarvardt/pgx-google-uuid) ## Adapters for 3rd Party Tracers * [github.com/jackhopner/pgx-xray-tracer](https://github.com/jackhopner/pgx-xray-tracer) ## Adapters for 3rd Party Loggers These adapters can be used with the tracelog package. * [github.com/jackc/pgx-go-kit-log](https://github.com/jackc/pgx-go-kit-log) * [github.com/jackc/pgx-log15](https://github.com/jackc/pgx-log15) * [github.com/jackc/pgx-logrus](https://github.com/jackc/pgx-logrus) * [github.com/jackc/pgx-zap](https://github.com/jackc/pgx-zap) * [github.com/jackc/pgx-zerolog](https://github.com/jackc/pgx-zerolog) * [github.com/mcosta74/pgx-slog](https://github.com/mcosta74/pgx-slog) * [github.com/kataras/pgx-golog](https://github.com/kataras/pgx-golog) ## 3rd Party Libraries with PGX Support ### [github.com/pashagolub/pgxmock](https://github.com/pashagolub/pgxmock) pgxmock is a mock library implementing pgx interfaces. pgxmock has one and only purpose - to simulate pgx behavior in tests, without needing a real database connection. ### [github.com/georgysavva/scany](https://github.com/georgysavva/scany) Library for scanning data from a database into Go structs and more. ### [github.com/vingarcia/ksql](https://github.com/vingarcia/ksql) A carefully designed SQL client for making using SQL easier, more productive, and less error-prone on Golang. ### [github.com/otan/gopgkrb5](https://github.com/otan/gopgkrb5) Adds GSSAPI / Kerberos authentication support. ### [github.com/wcamarao/pmx](https://github.com/wcamarao/pmx) Explicit data mapping and scanning library for Go structs and slices. ### [github.com/stephenafamo/scan](https://github.com/stephenafamo/scan) Type safe and flexible package for scanning database data into Go types. Supports, structs, maps, slices and custom mapping functions. ### [github.com/z0ne-dev/mgx](https://github.com/z0ne-dev/mgx) Code first migration library for native pgx (no database/sql abstraction). ### [github.com/amirsalarsafaei/sqlc-pgx-monitoring](https://github.com/amirsalarsafaei/sqlc-pgx-monitoring) A database monitoring/metrics library for pgx and sqlc. Trace, log and monitor your sqlc query performance using OpenTelemetry. ### [https://github.com/nikolayk812/pgx-outbox](https://github.com/nikolayk812/pgx-outbox) Simple Golang implementation for transactional outbox pattern for PostgreSQL using jackc/pgx driver. 
### [https://github.com/Arlandaren/pgxWrappy](https://github.com/Arlandaren/pgxWrappy) Simplifies working with the pgx library, providing convenient scanning of nested structures.# Getting Started [![stable](https://img.shields.io/nuget/v/Npgsql.svg?label=stable)](https://www.nuget.org/packages/Npgsql/) [![next patch](https://img.shields.io/myget/npgsql/v/npgsql.svg?label=next%20patch)](https://www.myget.org/feed/npgsql/package/nuget/Npgsql) [![vnext](https://img.shields.io/myget/npgsql-vnext/v/npgsql.svg?label=vnext)](https://www.myget.org/feed/npgsql-vnext/package/nuget/Npgsql) [![build](https://img.shields.io/github/actions/workflow/status/npgsql/npgsql/build.yml?branch=main)](https://github.com/npgsql/npgsql/actions) The best way to use Npgsql is to install its [nuget package](https://www.nuget.org/packages/Npgsql/). Npgsql aims to be fully ADO.NET-compatible, its API should feel almost identical to other .NET database drivers. Here's a basic code snippet to get you started: ```csharp var connectionString = "Host=myserver;Username=mylogin;Password=mypass;Database=mydatabase"; await using var dataSource = NpgsqlDataSource.Create(connectionString); // Insert some data await using (var cmd = dataSource.CreateCommand("INSERT INTO data (some_field) VALUES ($1)")) { cmd.Parameters.AddWithValue("Hello world"); await cmd.ExecuteNonQueryAsync(); } // Retrieve all rows await using (var cmd = dataSource.CreateCommand("SELECT some_field FROM data")) await using (var reader = await cmd.ExecuteReaderAsync()) { while (await reader.ReadAsync()) { Console.WriteLine(reader.GetString(0)); } } ``` You can find more info about the ADO.NET API in the [MSDN docs](https://msdn.microsoft.com/en-us/library/h43ks021(v=vs.110).aspx) or in many tutorials on the Internet. psqlODBC Configuration Options

psqlODBC Configuration Options

Advanced Options 1/3 Dialog Box

  • DEFAULTS: Press this button to restore the normal defaults for the settings described below.
     
  • Recognize Unique Indexes: Check this option.
     
  • Use Declare/Fetch: If true, the driver automatically uses declare cursor/fetch to handle SELECT statements and keeps 100 rows in a cache. This is mostly a great advantage, especially if you are only interested in reading and not updating. It results in the driver not sucking down lots of memory to buffer the entire result set. If set to false, cursors will not be used and the driver will retrieve the entire result set. For very large tables, this is very inefficient and may use up all the Windows memory/resources. However, it may handle updates better since the tables are not kept open, as they are when using cursors. This was the style of the old podbc32 driver. However, the behavior of the memory allocation is much improved so even when not using cursors, performance should at least be better than the old podbc32.
     
  • CommLog (C:\psqlodbc_xxxx.log): Log communications to/from the backend to that file. This is good for debugging problems.
     
  • Parse Statements: Tell the driver how to gather the information about result columns of queries, if the application requests that information before executing the query. See also ServerSide Prepare options.
    The driver checks this option first. If disabled then it checks the Server Side Prepare option.

    If this option is enabled, the driver will parse an SQL query statement to identify the columns and tables and gather statistics about them such as precision, nullability, aliases, etc. It then reports this information in SQLDescribeCol, SQLColAttributes, and SQLNumResultCols.

    When this option is disabled (the default), the query is sent to the server to be parsed and described. If the parser can not deal with a column (because it is a function or expression, etc.), it will fall back to describing the statement in the server. The parser is fairly sophisticated and can handle many things such as column and table aliases, quoted identifiers, literals, joins, cross-products, etc. It can correctly identify a function or expression column, regardless of the complexity, but it does not attempt to determine the data type or precision of these columns.
     
  • Ignore Timeout: Ignore SQL_ATTR_QUERY_TIMEOUT set using SQLSetStmtAttr(). Some tools issue SQLSetStmtAttr(.., SQL_ATTR_QUERY_TIMEOUT, ...) internally and sometimes it's difficult for users to change the value.
     
  • MyLog (C:\mylog_xxxx.log): Log debug messages to that file. This is good for debugging problems with the driver.
     
  • Unknown Sizes: This controls what SQLDescribeCol and SQLColAttributes will return as to precision for character data types (varchar, text, and unknown) in a result set when the precision is unknown. This was more of a workaround for pre-6.4 versions of PostgreSQL not being able to return the defined column width of the varchar data type.

    • Maximum: Always return the maximum precision of the data type.
    • Dont Know: Return "Don't Know" value and let application decide.
    • Longest: Return the longest string length of the column of any row. Beware of this setting when using cursors because the cache size may not be a good representation of the longest column in the cache.

    • MS Access: Seems to handle Maximum setting ok, as well as all the others.
      Borland: If sizes are large and lots of columns, Borland may crash badly (it doesn't seem to handle memory allocation well) if using Maximum size.

  • Data Type Options: affects how some data types are mapped:
     
    • Text as LongVarChar: PostgreSQL TEXT type is mapped to SQLLongVarchar, otherwise SQLVarchar.
    • Unknowns as LongVarChar: Unknown types (arrays, etc) are mapped to SQLLongVarChar, otherwise SQLVarchar
    • Bools as Char: Bools are mapped to SQL_CHAR, otherwise to SQL_BIT.

  • Max Varchar: The maximum precision of the Varchar and BPChar(char[x]) types. The default is 254 which actually means 255 because of the null terminator. Note: if you set this value higher than 254, Access will not let you index on varchar columns!
     
  • Cache Size: When using cursors, this is the row size of the tuple cache and the default is 100 rows. If not using cursors, this has no meaning.
     
  • Max LongVarChar: The maximum precision of the LongVarChar type. The default is 4094 which actually means 4095 with the null terminator. You can even specify (-4) for this size, which is the ODBC SQL_NO_TOTAL value.
     
  • SysTable Prefixes: Additional prefixes of table names to regard as System Tables. Tables that begin with "pg_" are always treated as system tables, even without this option. Separate each prefix with a semicolon (;)
     
  • Batch Size: Chunk size when executing batches with arrays of parameters. Setting this option to 1 forces one-by-one execution (the previous behavior).
     

Advanced Options 2/3 Dialog Box

  • ReadOnly: Whether the datasource will allow updates.
     
  • Show System Tables: The driver will treat system tables as regular tables in SQLTables. This is good for Access so you can see system tables.
     
  • LF <-> CR/LF conversion: Convert Unix style line endings to DOS style.
     
  • Updateable Cursors: Enable updateable cursor emulation in the driver.
     
  • Bytea as LO: Allow the use of bytea columns for Large Objects.
     
  • Row Versioning: Allows applications to detect whether data has been modified by other users while you are attempting to update a row. It also speeds the update process since every single column does not need to be specified in the where clause to update a row. The driver uses the "xmin" system field of PostgreSQL to allow for row versioning. Microsoft products seem to use this option well. See the faq for details on what you need to do to your database to allow for the row versioning feature to be used.
     
  • Display Optional Error Message: Display optional (detail, hint, statement position, etc.) error messages.
     
  • True is -1: Represent TRUE as -1 for compatibility with some applications.
     
  • Server side prepare: If set, the driver uses server-side prepared statements. See also Parse Statement option. Note that if a query needs to be described before execution, e.g. because the application calls SQLDescribeCol() or SQLNumResultCols() before SQLExecute(), the driver will send a Parse request to the server even if this option is disabled. In that case, the query that is sent to the server for parsing will have the parameter markers replaced with the actual parameter values, or NULL literals if the values are not known yet.
     
  • Int8 As: Define what datatype to report int8 columns as.
     
  • Numeric As: Specify the map from numeric items without precision to SQL data types. numeric (default), varchar, double or memo (SQL_LONGVARCHAR) can be specified.
     
  • Extra Opts: combination of the following bits.

      0x1: Force the output of short-length formatted connection string. Check this bit when you use MFC CDatabase class.
      0x2: Fake MS SQL Server so that MS Access recognizes PostgreSQL's serial type as AutoNumber type.
      0x4: Reply ANSI (not Unicode) char types for the inquiries from applications. Try to check this bit when your applications don't seem to be good at handling Unicode data.
     
  • Level of rollback on errors: Specifies what to rollback should an error occur.
     
    • Nop(0): Don't rollback anything and let the application handle the error.
       
    • Transaction(1): Rollback the entire transaction.
       
    • Statement(2): Rollback the statement.
       

    • Setup note: This specification is set up with the PROTOCOL option parameter.

      PROTOCOL=7.4-(0|1|2)
      The default value is Statement (it is Transaction for servers before 8.0).

  • OID Options:
     
    • Show Column: Includes the OID in SQLColumns. This is good for using as a unique identifier to update records if no good key exists OR if the key has many parts, which blows up the backend.
       
    • Fake Index: This option fakes a unique index on OID. This is useful when there is not a real unique index on OID and for apps which can't ask what the unique identifier should be (i.e., Access 2.0).
       
  • Connect Settings: The driver sends these commands to the backend upon a successful connection.  It sends these settings AFTER it sends the driver "Connect Settings". Use a semi-colon (;) to separate commands. This can now handle any query, even if it returns results. The results will be thrown away however!
     
  • TCP KEEPALIVE setting (by sec): Specifies the TCP keepalives setting.
     
    • disable: Check when client-side TCP keepalives are not used.
       
    • idle time: The number of seconds of inactivity after which TCP should send a keepalive message to the server.
       
    • interval: The number of seconds after which a TCP keepalive message that is not acknowledged by the server should be retransmitted.
       

Advanced Options 3/3 Dialog Box

  • Allow connections unrecoverable by MSDTC?: How to test distributed transactions.
     
    • yes: MSDTC is needless unless applications crash. So don't check the connectivity from MSDTC.
       
    • rejects sslmode verify-[ca|full]: Reject SSL connections with verify-ca or verify-full mode, because in those cases MSDTC could hardly establish the connection.
       
    • no: First confirm the connectivity from MSDTC.
       

  • Libpq parameters: Specify libpq connection parameters with conninfo style strings e.g. sslrootcert=c:\\myfolder\\myroot sslcert=C:\\myfolder\\mycert sslkey=C:\\myfolder\\mykey.
    Though host, port, dbname, user, password, sslmode, keepalives_idle or keepalive_interval parameters can be set using this (pqopt) option, their use is not recommended because they are ordinarily set by other options. When some settings for those parameters conflict with other ordinary options, connections are rejected.
     

Global settings Dialog Box

This dialog allows you to specify pre-connection/default logging options

  • CommLog (C:\psqlodbc_xxxx.log - Communications log): Log communications to/from the backend to that file. This is good for debugging problems.
     
  • MyLog (C:\mylog_xxxx.log - Detailed debug output): Log debug messages to that file. This is good for debugging problems with the driver.
     
  • MSDTCLog (C:\pgdtclog\mylog_xxxx.log - MSDTC debug output): Log debug messages to that file. This is good for debugging problems with the MSDTC.
     
  • Specification of the holder for log outputs: Adjustment of write permission.
     

Manage DSN Dialog Box

This dialog allows you to select which PostgreSQL ODBC driver to use for this connection. Note that this may not work with third party drivers.

How to specify as a connection option

There is a method of specifying connection options in a keyword string.

Example (VBA):

  • myConn = "ODBC;DRIVER={PostgreSQL Unicode};" & serverConn & _
    "A0=0;A1=7.4;A2=0;A3=0;A4=0;A5=0;A6=;A7=100;A8=4096;A9=0;" & _
    "B0=254;B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=1;BI=-5;" & _
    "C0=0;C2=dd_;C4=1;C5=1;C6=1;C7=1;C8=1;C9=0;CA=verify-full;D1=30;D4=40;" & _
    "D5={sslrootcert=C:\\myfolder\\myroot sslcert=C:\\myfolder\\mycert sslkey=C:\\myfolder\\mykey}"

Please refer to a keyword list for details.
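The same keyword string can also be passed from other ODBC clients. As an illustrative sketch only (pyodbc is not part of psqlODBC; the driver name, host, credentials, and option values below are placeholders, and the keyword names should be verified against the keyword list referenced above), a DSN-less connection from Python might look like this:

  import pyodbc

  # DSN-less connection using psqlODBC keywords; UseDeclareFetch and Fetch
  # are assumed here to correspond to the "Use Declare/Fetch" and "Cache Size"
  # options described above.
  conn = pyodbc.connect(
      "Driver={PostgreSQL Unicode};Server=localhost;Port=5432;"
      "Database=mydb;Uid=myuser;Pwd=mypassword;"
      "UseDeclareFetch=1;Fetch=100;"
  )
  cursor = conn.cursor()
  cursor.execute("SELECT version()")
  print(cursor.fetchone()[0])
  conn.close()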

PostgreSQL PDO Driver (PDO_PGSQL)
Introduction: PDO_PGSQL is a driver that implements the PHP Data Objects (PDO) interface to enable access from PHP to PostgreSQL databases.
Resource Types: This extension defines a stream resource returned by PDO::pgsqlLOBOpen.
General notes: bytea fields are returned as streams.
PDO_PGSQL DSN: Connecting to PostgreSQL databases. The PDO_PGSQL Data Source Name (DSN) is composed of the following elements, delimited by spaces or semicolons:

  • DSN prefix: The DSN prefix is pgsql:.
  • host: The hostname on which the database server resides.
  • port: The port on which the database server is running.
  • dbname: The name of the database.
  • user: The name of the user for the connection. If you specify the user name in the DSN, PDO ignores the value of the user name argument in the PDO constructor.
  • password: The password of the user for the connection. If you specify the password in the DSN, PDO ignores the value of the password argument in the PDO constructor.
  • sslmode: The SSL mode. Supported values and their meaning are listed in the PostgreSQL Documentation.

All semicolons in the DSN string are replaced by spaces, because PostgreSQL expects this format. This implies that semicolons in any of the components (e.g. password or dbname) are not supported.

PDO_PGSQL DSN examples: The following examples show a PDO_PGSQL DSN for connecting to a PostgreSQL database over TCP, and for connecting via the unix socket /tmp/.s.PGSQL.5432.
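For illustration only (host, port, database, and credentials are placeholders, not values taken from this manual), a DSN for a TCP connection typically looks like

  pgsql:host=localhost;port=5432;dbname=exampledb;user=exampleuser;password=examplepass

and a DSN connecting through the local unix socket directory (the directory containing /tmp/.s.PGSQL.5432) looks like

  pgsql:host=/tmp;port=5432;dbname=exampledb;user=exampleuser;password=examplepass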
# Copyright (c) 2021-2024, Crate.io Inc. # Distributed under the terms of the AGPLv3 license, see LICENSE. from cratedb_toolkit.info.model import InfoElement, LogElement from cratedb_toolkit.info.util import get_single_value class Library: """ A collection of SQL queries and utilities suitable for diagnostics on CrateDB. Credits to the many authors and contributors of CrateDB diagnostics utilities, dashboards, and cheat sheets. Acknowledgements: Baurzhan Sakhariev, Eduardo Legatti, Georg Traar, Hernan Cianfagna, Ivan Sanchez Valencia, Karyn Silva de Azevedo, Niklas Schmidtmer, Walter Behmann. References: - https://community.cratedb.com/t/similar-elasticsearch-commands/1455/4 - CrateDB Admin UI. - CrateDB Grafana General Diagnostics Dashboard. - Debugging CrateDB - Queries Cheat Sheet. """ class Health: """ CrateDB health check queries. """ backups_recent = InfoElement( name="backups_recent", label="Recent Backups", sql=""" SELECT repository, name, finished, state FROM sys.snapshots ORDER BY finished DESC LIMIT 10; """, description="Most recent 10 backups", ) cluster_name = InfoElement( name="cluster_name", label="Cluster name", sql=r"SELECT name FROM sys.cluster;", transform=get_single_value("name"), ) nodes_count = InfoElement( name="cluster_nodes_count", label="Total number of cluster nodes", sql=r"SELECT COUNT(*) AS count FROM sys.nodes;", transform=get_single_value("count"), ) nodes_list = InfoElement( name="cluster_nodes_list", label="Cluster Nodes", sql="SELECT * FROM sys.nodes ORDER BY hostname;", description="Telemetry information for all cluster nodes.", ) table_health = InfoElement( name="table_health", label="Table Health", sql="SELECT health, COUNT(*) AS table_count FROM sys.health GROUP BY health;", description="Table health short summary", ) class JobInfo: """ Information distilled from `sys.jobs_log` and `sys.jobs`. 
""" age_range = InfoElement( name="age_range", label="Query age range", description="Timestamps of first and last job", sql=""" SELECT MIN(started) AS "first_job", MAX(started) AS "last_job" FROM sys.jobs_log; """, ) by_user = InfoElement( name="by_user", label="Queries by user", sql=r""" SELECT username, COUNT(username) AS count FROM sys.jobs_log GROUP BY username ORDER BY count DESC; """, description="Total number of queries per user.", ) duration_buckets = InfoElement( name="duration_buckets", label="Query Duration Distribution (Buckets)", sql=""" WITH dur AS ( SELECT ended-started::LONG AS duration FROM sys.jobs_log ), pct AS ( SELECT [0.25,0.5,0.75,0.99,0.999,1] pct_in, percentile(duration,[0.25,0.5,0.75,0.99,0.999,1]) as pct, count(*) cnt FROM dur ) SELECT UNNEST(pct_in) * 100 AS bucket, cnt - CEIL(UNNEST(pct_in) * cnt) AS count, CEIL(UNNEST(pct)) duration ---cnt FROM pct; """, description="Distribution of query durations, bucketed.", ) duration_percentiles = InfoElement( name="duration_percentiles", label="Query Duration Distribution (Percentiles)", sql=""" SELECT min(ended-started::LONG) AS min, percentile(ended-started::LONG, 0.50) AS p50, percentile(ended-started::LONG, 0.90) AS p90, percentile(ended-started::LONG, 0.99) AS p99, MAX(ended-started::LONG) AS max FROM sys.jobs_log LIMIT 50; """, description="Distribution of query durations, percentiles.", ) history100 = InfoElement( name="history", label="Query History", sql=""" SELECT started AS "time", stmt, (ended::LONG - started::LONG) AS duration, username FROM sys.jobs_log WHERE stmt NOT ILIKE '%snapshot%' ORDER BY time DESC LIMIT 100; """, transform=lambda x: list(reversed(x)), description="Statements and durations of the 100 recent queries / jobs.", ) history_count = InfoElement( name="history_count", label="Query History Count", sql=""" SELECT COUNT(*) AS job_count FROM sys.jobs_log; """, transform=get_single_value("job_count"), description="Total number of queries on this node.", ) performance15min = InfoElement( name="performance15min", label="Query performance 15min", sql=r""" SELECT CURRENT_TIMESTAMP AS last_timestamp, (ended / 10000) * 10000 + 5000 AS ended_time, COUNT(*) / 10.0 AS qps, TRUNC(AVG(ended::BIGINT - started::BIGINT), 2) AS duration, UPPER(regexp_matches(stmt,'^\s*(\w+).*')[1]) AS query_type FROM sys.jobs_log WHERE ended > now() - ('15 minutes')::INTERVAL GROUP BY 1, 2, 5 ORDER BY ended_time ASC; """, description="The query performance within the last 15 minutes, including two metrics: " "queries per second, and query speed (ms).", ) running = InfoElement( name="running", label="Currently Running Queries", sql=""" SELECT started AS "time", stmt, (CURRENT_TIMESTAMP::LONG - started::LONG) AS duration, username FROM sys.jobs WHERE stmt NOT ILIKE '%snapshot%' ORDER BY time; """, description="Statements and durations of currently running queries / jobs.", ) running_count = InfoElement( name="running_count", label="Number of running queries", sql=""" SELECT COUNT(*) AS job_count FROM sys.jobs; """, transform=get_single_value("job_count"), description="Total number of currently running queries.", ) top100_count = InfoElement( name="top100_count", label="Query frequency", description="The 100 most frequent queries.", sql=""" SELECT stmt, COUNT(stmt) AS stmt_count, MAX((ended::LONG - started::LONG) ) AS max_duration, MIN((ended::LONG - started::LONG) ) AS min_duration, AVG((ended::LONG - started::LONG) ) AS avg_duration, PERCENTILE((ended::LONG - started::LONG), 0.99) AS p90 FROM sys.jobs_log GROUP BY stmt ORDER BY 
stmt_count DESC LIMIT 100; """, ) top100_duration_individual = InfoElement( name="top100_duration_individual", label="Individual Query Duration", description="The 100 queries by individual duration.", sql=""" SELECT (ended::LONG - started::LONG) AS duration, stmt FROM sys.jobs_log ORDER BY duration DESC LIMIT 100; """, unit="ms", ) top100_duration_total = InfoElement( name="top100_duration_total", label="Total Query Duration", description="The 100 queries by total duration.", sql=""" SELECT SUM(ended::LONG - started::LONG) AS total_duration, stmt, COUNT(stmt) AS stmt_count FROM sys.jobs_log GROUP BY stmt ORDER BY total_duration DESC LIMIT 100; """, unit="ms", ) class Logs: """ Access `sys.jobs_log` for logging purposes. """ """ TODO: Implement `tail` in one way or another. -- https://stackoverflow.com/q/4714975 @seut says: why? whats the issue with sorting it desc by ended? As the table will be computed by results of all nodes inside the cluster, the natural ordering might not be deterministic. Ideas:: SELECT * FROM sys.jobs_log OFFSET -10; SELECT * FROM sys.jobs_log OFFSET (SELECT count(*) FROM sys.jobs_log)-10; - https://cratedb.com/docs/crate/reference/en/latest/general/builtins/scalar-functions.html#to-char-expression-format-string - https://cratedb.com/docs/crate/reference/en/latest/general/builtins/scalar-functions.html#date-format-format-string-timezone-timestamp """ user_queries_latest = LogElement( name="user_queries_latest", label="Latest User Queries", sql=r""" SELECT DATE_FORMAT('%Y-%m-%dT%H:%i:%s.%f', started) AS started, DATE_FORMAT('%Y-%m-%dT%H:%i:%s.%f', ended) AS ended, classification, stmt, username, node FROM sys.jobs_log WHERE stmt NOT LIKE '%sys.%' AND stmt NOT LIKE '%information_schema.%' ORDER BY ended DESC LIMIT {limit}; """, ) class Replication: """ Information about logical replication. """ # https://github.com/crate/crate/blob/master/docs/admin/logical-replication.rst#monitoring subscriptions = """ SELECT s.subname, s.subpublications, sr.srrelid::text, sr.srsubstate, sr.srsubstate_reason FROM pg_subscription s JOIN pg_subscription_rel sr ON s.oid = sr.srsubid ORDER BY s.subname; """ class Resources: """ About system resources. """ # TODO: Needs templating. column_cardinality = """ SELECT tablename, attname, n_distinct FROM pg_stats WHERE schemaname = '...' AND tablename IN (...) AND attname IN (...); """ file_descriptors = """ SELECT name AS node_name, process['open_file_descriptors'] AS "open_file_descriptors", process['max_open_file_descriptors'] AS max_open_file_descriptors FROM sys.nodes ORDER BY node_name; """ heap_usage = """ SELECT name AS node_name, heap['used'] / heap['max']::DOUBLE AS heap_used FROM sys.nodes ORDER BY node_name; """ tcp_connections = """ SELECT name AS node_name, connections FROM sys.nodes ORDER BY node_name; """ # TODO: Q: Why "14"? Is it about only getting information about the `write` thread pool? # A: Yes, the `write` thread pool will be exposed as the last entry inside this array. # But this may change in future. thread_pools = """ SELECT name AS node_name, thread_pools[14]['queue'], thread_pools[14]['active'], thread_pools[14]['threads'] FROM sys.nodes ORDER BY node_name; """ class Settings: """ Reflect cluster settings. 
""" info = """ SELECT name, master_node, settings['cluster']['routing']['allocation']['cluster_concurrent_rebalance'] AS cluster_concurrent_rebalance, settings['indices']['recovery']['max_bytes_per_sec'] AS max_bytes_per_sec FROM sys.cluster LIMIT 1; """ class Shards: """ Information about shard / node / table / partition allocation and rebalancing. """ # https://cratedb.com/docs/crate/reference/en/latest/admin/system-information.html#example # TODO: Needs templating. for_table = """ SELECT schema_name, table_name, id, partition_ident, num_docs, primary, relocating_node, routing_state, state, orphan_partition FROM sys.shards WHERE schema_name = '{schema_name}' AND table_name = '{table_name}'; """ # Identify the location of the shards for each partition. # TODO: Needs templating. location_for_partition = """ SELECT table_partitions.table_schema, table_partitions.table_name, table_partitions.values[{partition_column}]::TIMESTAMP, shards.primary, shards.node['name'] FROM sys.shards JOIN information_schema.table_partitions ON shards.partition_ident=table_partitions.partition_ident WHERE table_partitions.table_name = {table_name} ORDER BY 1,2,3,4,5; """ allocation = InfoElement( name="shard_allocation", sql=""" SELECT IF(primary = TRUE, 'primary', 'replica') AS shard_type, COUNT(*) AS shards FROM sys.allocations WHERE current_state != 'STARTED' GROUP BY 1 """, label="Shard Allocation", description="Support identifying issues with shard allocation.", ) max_checkpoint_delta = InfoElement( name="max_checkpoint_delta", sql=""" SELECT COALESCE(MAX(seq_no_stats['local_checkpoint'] - seq_no_stats['global_checkpoint']), 0) AS max_checkpoint_delta FROM sys.shards; """, transform=get_single_value("max_checkpoint_delta"), label="Delta between local and global checkpoint", description="If the delta between the local and global checkpoint is significantly large, " "shard replication might have stalled or slowed down.", ) # data-hot-2 262 # data-hot-1 146 node_shard_distribution = InfoElement( name="node_shard_distribution", label="Shard Distribution", sql=""" SELECT node['name'] AS node_name, COUNT(*) AS num_shards FROM sys.shards WHERE primary = true GROUP BY node_name; """, description="Shard distribution across nodes.", ) not_started = InfoElement( name="shard_not_started", label="Shards not started", sql=""" SELECT * FROM sys.allocations WHERE current_state != 'STARTED'; """, description="Information about shards which have not been started.", ) not_started_count = InfoElement( name="shard_not_started_count", label="Number of shards not started", description="Total number of shards which have not been started.", sql=""" SELECT COUNT(*) AS not_started_count FROM sys.allocations WHERE current_state != 'STARTED'; """, transform=get_single_value("not_started_count"), ) rebalancing_progress = InfoElement( name="shard_rebalancing_progress", label="Shard Rebalancing Progress", sql=""" SELECT table_name, schema_name, recovery['stage'] AS recovery_stage, AVG(recovery['size']['percent']) AS progress, COUNT(*) AS count FROM sys.shards GROUP BY table_name, schema_name, recovery_stage; """, description="Information about rebalancing progress.", ) rebalancing_status = InfoElement( name="shard_rebalancing_status", label="Shard Rebalancing Status", sql=""" SELECT node['name'], id, recovery['stage'], recovery['size']['percent'], routing_state, state FROM sys.shards WHERE routing_state IN ('INITIALIZING', 'RELOCATING') ORDER BY id; """, description="Information about rebalancing activities.", ) table_allocation = 
InfoElement( name="table_allocation", label="Table Allocations", sql=""" SELECT table_schema, table_name, node_id, shard_id, partition_ident, current_state, decisions, explanation FROM sys.allocations; """, description="Table allocation across nodes, shards, and partitions.", ) table_allocation_special = InfoElement( name="table_allocation_special", label="Table Allocations Special", sql=""" SELECT decisions[2]['node_name'] AS node_name, COUNT(*) AS table_count FROM sys.allocations GROUP BY decisions[2]['node_name']; """, description="Table allocation. Special.", ) table_shard_count = InfoElement( name="table_shard_count", label="Table Shard Count", sql=""" SELECT table_schema, table_name, SUM(number_of_shards) AS num_shards FROM information_schema.table_partitions WHERE closed = false GROUP BY table_schema, table_name; """, description="Total number of shards per table.", ) total_count = InfoElement( name="shard_total_count", label="Number of shards", description="Total number of shards.", sql=""" SELECT COUNT(*) AS shard_count FROM sys.shards """, transform=get_single_value("shard_count"), ) # TODO: Are both `translog_uncommitted` items sensible? translog_uncommitted = InfoElement( name="translog_uncommitted", label="Uncommitted Translog", description="Check if translogs are committed properly by comparing the " "`flush_threshold_size` with the `uncommitted_size` of a shard.", sql=""" SELECT sh.table_name, sh.partition_ident, SUM(sh.translog_stats['uncommitted_size']) / POWER(1024, 3) as "translog_uncomitted_in_gib" FROM information_schema.table_partitions tp JOIN sys.shards sh USING (table_name, partition_ident) WHERE sh.translog_stats['uncommitted_size'] > settings['translog']['flush_threshold_size'] GROUP BY 1, 2 ORDER BY 3 DESC; """, ) translog_uncommitted_size = InfoElement( name="translog_uncommitted_size", label="Total uncommitted translog size", description="A large number of uncommitted translog operations can indicate issues with shard replication.", sql=""" SELECT COALESCE(SUM(translog_stats['uncommitted_size']), 0) AS translog_uncommitted_size FROM sys.shards; """, transform=get_single_value("translog_uncommitted_size"), unit="bytes", )# Datasets API Provide access to datasets, to be easily consumed by tutorials and/or production applications. ## Install ```shell pip install --upgrade 'cratedb-toolkit[datasets]' ``` ## Synopsis ```python from cratedb_toolkit.datasets import load_dataset dataset = load_dataset("tutorial/weather-basic") print(dataset.ddl) ``` ## Usage ### Built-in datasets Load example datasets into CrateDB database tables. ```python from cratedb_toolkit.datasets import load_dataset # Weather data example. dataset = load_dataset("tutorial/weather-basic") dataset.dbtable(dburi="crate://crate@localhost/", table="weather_data").load() ``` ```python from cratedb_toolkit.datasets import load_dataset # UK wind farm data example. dataset = load_dataset("tutorial/windfarm-uk-info") dataset.dbtable(dburi="crate://crate@localhost/", table="windfarms").load() dataset = load_dataset("tutorial/windfarm-uk-data") dataset.dbtable(dburi="crate://crate@localhost/", table="windfarm_output").load() ``` ### Kaggle For accessing datasets on Kaggle, you will need an account on their platform. #### Authentication Either create a configuration file `~/.kaggle/kaggle.json` in JSON format, ```json {"username":"acme","key":"134af98bdb0bd0fa92078d9c37ac8f78"} ``` or, alternatively, use those environment variables. 
```shell export KAGGLE_USERNAME=acme export KAGGLE_KEY=134af98bdb0bd0fa92078d9c37ac8f78 ``` #### Acquisition Load a dataset on Kaggle into a CrateDB database table. ```python from cratedb_toolkit.datasets import load_dataset dataset = load_dataset("kaggle://guillemservera/global-daily-climate-data/daily_weather.parquet") dataset.dbtable(dburi="crate://crate@localhost/", table="kaggle_daily_weather").load() ``` ## In Practice Please refer to those notebooks to learn how `load_dataset` works in practice. - [How to Build Time Series Applications in CrateDB] - [Exploratory data analysis with CrateDB] - [Time series decomposition with CrateDB] [Exploratory data analysis with CrateDB]: https://github.com/crate/cratedb-examples/blob/main/topic/timeseries/exploratory_data_analysis.ipynb [How to Build Time Series Applications in CrateDB]: https://github.com/crate/cratedb-examples/blob/main/topic/timeseries/dask-weather-data-import.ipynb [Time series decomposition with CrateDB]: https://github.com/crate/cratedb-examples/blob/main/topic/timeseries/time-series-decomposition.ipynb# CrateDB GTFS / GTFS-RT Transit Data Demo ## Introduction This is a demo application that has a Python back end and JavaScript / Leaflet maps front end. It uses GTFS ([General Transit Feed Specification](https://gtfs.org/)) and GTFS-RT (the extra [realtime feeds for GTFS](https://gtfs.org/documentation/realtime/reference/)) to store and analyze transit system route, trip, stop and vehicle movement data in [CrateDB](https://cratedb.com). GTFS and GRTFS-RT are standard ways of representing this type of data. This means that, in theory, this project could be applicable to any transit system that adopts this approach. However, there can be differences between transit agencies, so some aspects of the project may need adapting for that. We have developed this demo using GTFS and GTFS-RT data from the [Washington Metropolitan Area Transit Authority](https://www.wmata.com/about/developers/) (WMATA), specifically for the DC Metro train system. The design of the database schema allows for data from multiple agencies / transit systems to be stored as long as each agency has a unique agency ID. Here's a sped up demo of the front end running, showing train movements on the DC Metro system: ![Demo showing front end running](gtfs_demo_front_end_sped_up.gif) Individual trains can be tracked by clicking on them, which displays information about the train's current trip in a popup: ![Demo showing details of a single train trip](gtfs_demo_front_end.png) ## Prerequisites To run this project you'll need to install the following software: * Python 3 ([download](https://www.python.org/downloads/)) - we've tested this project with Python 3.12.2 on macOS Sequoia. * Git command line tools ([download](https://git-scm.com/downloads)). * Your favorite code editor, to edit configuration files and browse/edit the code if you wish. Visual Studio Code is great for this. * Access to a cloud or local CrateDB cluster (see below for details). * A WMATA API key. These are free, and you can register for API access and get your key at the [WMATA developer portal](https://developer.wmata.com/). ## Getting the Code Next you'll need to get a copy of the code from GitHub by cloning the repository. 
Open up your terminal and change directory to wherever you store coding projects, then enter the following commands: ```bash git clone https://github.com/crate/devrel-gtfs-transit.git cd devrel-gtfs-transit ``` ## Getting a CrateDB Database You'll need a CrateDB database to store the project's data in. Choose between a free hosted instance in the cloud, or run the database locally. Either option is fine. ### Cloud Option Create a database in the cloud by first pointing your browser at [`console.cratedb.cloud`](https://console.cratedb.cloud/). Login or create an account, then follow the prompts to create a "CRFREE" database on shared infrastructure in the cloud of your choice (choose from Amazon AWS, Microsoft Azure and Google Cloud). Pick a region close to where you live to minimize latency between your machine running the code and the database that stores the data. Once you've created your cluster, you'll see a "Download" button. This downloads a text file containing a copy of your database hostname, port, username and password. Make sure to download these as you'll need them later and won't see them again. Your credentials will look something like this example (exact values will vary based on your choice of AWS/Google Cloud/Azure etc): ``` Host: some-host-name.gke1.us-central1.gcp.cratedb.net Port (PostgreSQL): 5432 Port (HTTPS): 4200 Database: crate Username: admin Password: the-password-will-be-here ``` Wait until the cluster status shows a green status icon and "Healthy" status before continuing. Note that it may take a few moments to provision your database. ### Local Option The best way to run CrateDB locally is by using Docker. We've provided a Docker Compose file for you. Once you've installed [Docker Desktop](https://www.docker.com/products/docker-desktop/), you can start the database like this: ```bash docker compose up ``` Once the database is up and running, you can access the console by pointing your browser at: ``` http://localhost:4200 ``` Note that if you have something else running on port 4200 (CrateDB admin UI) or port 5432 (Postgres protocol port) you'll need to stop those other services first, or edit the Docker compose file to expose these ports at different numbers on your local machine. ## Creating the Database Tables We've provided a Python data loader script that will create the database tables in CrateDB for you. You'll first need to create a virtual environment for the data loader and configure it: ```bash cd gtfs-static python -m venv venv . ./venv/bin/activate pip install -r requirements.txt ``` Now make a copy of the example environment file provided: ```bash cp env.example .env ``` Edit the `.env` file, changing the value of `CRATEDB_URL` to be the connection URL for your CrateDB database. If you're running CrateDB locally (for example with the provided Docker Compose file) there's nothing to change here. If you're running CrateDB in the cloud, change the connection URL as follows, using the values for your cloud cluster instance: ``` https://admin:@:4200 ``` Save your changes. Next, run the data loader to create the tables used by this project: ```bash python dataloader.py createtables ``` You should see output similar to this: ``` Created agencies table if needed. Created networks table if needed. Created routes table if needed. Created vehicle positions table if needed. Created trip updates table if needed. Created trips table if needed. Created stops table if needed. Created stop_times table if needed. Created config table if needed. 
Finished creating any necessary tables. ``` Use the CrateDB console to verify that the above named tables were all created in the `doc` schema. ## Load the Static Data The next step is to load static data about the transport network into the database. We'll use Washington DC (WMATA) as an example. First, load the configuration data for the agency: ```bash python dataloader.py config-files/wmata.json ``` Now, load data into the `agencies` table: ```bash python dataloader.py data-files/wmata/agency.txt ``` Next, populate the `routes` table: ```bash python dataloader.py data-files/wmata/routes.txt ``` Then, populate the `stops` table. Here, `1` is the agency ID, and must match the spelling and capitalization of the agency ID in `agency.txt`: ```bash python dataloader.py data-files/wmata/stops.txt 1 ``` Finally, insert data into the `networks` table. Here `WMATA` is the agency name, and must match the spelling and capitalization of the agency name in `agency.txt`: ```bash python dataloader.py geojson/wmata/wmata.geojson WMATA ``` ## Start the Front End Flask Application This project has a web front end and a [Flask](https://flask.palletsprojects.com/) application server. The front end is written in vanilla JavaScript and uses the [Bulma](https://bulma.io/) framework for the majority of the styling. [Leaflet](https://leafletjs.com/) is used to render maps and handle map events. The Flask application uses the [CrateDB Python driver](https://cratedb.com/docs/python/en/latest/index.html) to talk to the database. Before starting the front end Flask application, you'll need to create a virtual environment and configure it: ```bash cd front-end python -m venv venv . ./venv/bin/activate pip install -r requirements.txt ``` Now make a copy of the example environment file provided: ```bash cp env.example .env ``` Edit the `.env` file, changing the value of `CRATEDB_URL` to be the connection URL for your CrateDB database. If you're running CrateDB locally (for example with the provided Docker Compose file) there's nothing to change here. If you're running CrateDB in the cloud, change the connection URL as follows, using the values for your cloud cluster instance: ``` https://admin:@:4200 ``` Now, edit the values of `GTFS_AGENCY_NAME` and `GTFS_AGENCY_ID` to contain the agency name and ID for the agency you're using. These should match the values returned by this query: ```sql SELECT agency_name, agency_id FROM agencies ``` For example, for Washington DC / WMATA, the correct settings are: ``` GTFS_AGENCY_NAME=WMATA GTFS_AGENCY_ID=1 ``` Don't forget that if either value contains a space, you'll need to surround the entire value with quotation marks. Save your changes. Now, start the front end application: ```bash python app.py ``` Using your browser, visit `http://localhost:8000` to view the map front end interface. At this point you should see the route map for the agency that you're working with, along with the stations / stops on the routes. Clicking a station or stop should show information about it. No vehicles will be visible on the map yet. To see these, you'll need to run the real time data receiver components (see below). When you're finished with the front end application, stop it with `Ctrl-C` (but keep it running for now, so you'll be able to see the real time data soon...) ## Start the Real Time Data Receiver Components The real time data receivers are responsible for reading real time vehicle location and other data from the transit agencies and saving it in the database.
First, create a virtual environment and install the dependencies: ```bash cd front-end python -m venv venv . ./venv/bin/activate pip install -r requirements.txt ``` Now make a copy of the example environment file provided: ```bash cp env.example .env ``` Edit the `.env` file, changing the value of `CRATEDB_URL` to be the connection URL for your CrateDB database. If you're running CrateDB locally (for example with the provided Docker Compose file) there's nothing to change here. If you're running CrateDB in the cloud, change the connection URL as follows, using the values for your cloud cluster instance: ``` https://admin:@:4200 ``` Now, edit the value of `GTFS_AGENCY_ID` to contain the ID for the agency you're using. It should match the value returned by this query: ```sql SELECT agency_id FROM agencies ``` For example, for Washington DC / WMATA, the correct setting is: ``` GTFS_AGENCY_ID=1 ``` Set the value of `SLEEP_INTERVAL` to be the number of seconds that the component sleeps between checking the transit agency for updates. This defaults to `1`, but you may need to set a longer interval if the agency you're using implements rate limiting on its API endpoints. Next, set the value of `GTFS_POSITIONS_FEED_URL` to the realtime vehicle movements endpoint URL for your agency. For example for Washington DC / WMATA this is `https://api.wmata.com/gtfs/rail-gtfsrt-vehiclepositions.pb`. Set the value of `GTFS_TRIPS_FEED_URL` to the realtime trip updates endpoint URL for your agency. For example for Washington DC / WMATA this is `https://api.wmata.com/gtfs/rail-gtfsrt-tripupdates.pb`. Set the value of `GTFS_TRIPS_SCHEDULE_URL` to the static GTFS URL for your agency. This will be a URL that serves a zip file. For example for Washington DC / WMATA this is `https://api.wmata.com/gtfs/rail-gtfs-static.zip`. Finally, if your agency requires an API key to access realtime data, set the values of `GTFS_POSITIONS_FEED_KEY`, `GTFS_TRIPS_FEED_KEY` and `GTFS_TRIPS_SCHEDULE_KEY` appropriately. You'll most likely use the same API key for each. Save your changes. The schedule of trips is stored in two tables in CrateDB: `trips` and `stop_times`. You need to update this **once daily** by running: ```bash python trip_schedule.py 1 ``` Start gathering real time vehicle position data continuously by running this command: ```bash python vehicle_positions.py ``` You should also start continuous gathering of real time trip update data by running: ```bash python trip_updates.py ``` When you're finished with the real time data receivers, stop them with `Ctrl-C`. Assuming that the Flask front end web application is running, you should now see vehicle movement details at `http://localhost:8000`. Clicking a vehicle should display a pop up with information about the trip that the vehicle is currently on: trip ID, next stops, time estimates etc. ## Analyzing the Data Once the system's been running for a while, you might want to run some queries that analyze and aggregate data. We've provided some examples in the [`example_queries.md`](example_queries.md) file. ## Work in Progress Notes Below Getting GeoJSON from GTFS: https://github.com/BlinkTagInc/gtfs-to-geojson ```bash cd gtfs-static gtfs-to-geojson --configPath ./config_wmata.json ``` Getting GTFS static data for WMATA rail: ```bash wget --header="api_key: " https://api.wmata.com/gtfs/rail-gtfs-static.zip ```# CrateDB Offshore Wind Farms Demo Application ## Introduction This is a basic demo application that visualizes data in the UK Offshore wind farms example dataset. 
Scroll around the map to see the locations of the wind farms and click on a marker to see details about that wind farm's performance. Zoom in to see the boundaries of each wind farm as a polygon - click on that to display a pop up with additional data. Finally, scroll in some more to see the locations of individual turbines. ![Demo showing front end interactions](demo.gif) Other resources that use this dataset include: * A video of a live stream introducing CrateDB and looking at the C# code for this project. [Watch it on YouTube](https://www.youtube.com/watch?v=K4zBhJhSFCY). * A conference talk from the AI and Big Data Expo, Amsterdam 2024. [Watch it on YouTube](https://www.youtube.com/watch?v=xqiLGjaTlBk). * A Jupyter notebook that lets you explore the queries shown in the conference talk. [Run it on Google Colab](https://github.com/crate/cratedb-examples/tree/main/topic/multi-model). * The raw data for this dataset, as JSON files. [Clone the GitHub repository](https://github.com/crate/cratedb-datasets/tree/main/devrel/uk-offshore-wind-farm-data). Backend servers implementations for this project are available for C#, Python, Node.js, Go and Java. ## Prerequisites To run this project you'll need to install the following software: * (C# version) .NET SDK ([download](https://dotnet.microsoft.com/en-us/download)) - we've tested this project with version 9.0 on macOS Sequoia. * (Python version) Python 3 ([download](https://www.python.org/downloads/)) - we've tested this project with Python 3.12 on macOS Sequoia. * (Node.js version) Node.js ([download](https://nodejs.org/en/download)) - we've tested this project with Node.js 22 on macOS Sequoia. * (Go version) Go ([download](https://go.dev/doc/install)) - we've tested this project with Go version 1.24.0 on macOS Sequoia. * (Java Version) a JDK ([download](https://openjdk.org/projects/jdk/21/)) - we've tested this project with OpenJDK 21.0.6 on macOS Sequoia. * (Java Version) Apache Maven ([download](https://maven.apache.org/)) - we've tested this project with Maven 3.9.6. * Git command line tools ([download](https://git-scm.com/downloads)). * Your favorite code editor, to edit configuration files and browse/edit the code if you wish. [Visual Studio Code](https://code.visualstudio.com/) is great for this. * Access to a cloud or local CrateDB cluster (see below for details). ## Getting the Code Next you'll need to get a copy of the code from GitHub by cloning the repository. Open up your terminal and change directory to wherever you store coding projects, then enter the following commands: ```bash git clone https://github.com/crate/devrel-offshore-wind-farms-demo.git cd devrel-offshore-wind-farms-demo ``` ## Getting a CrateDB Database You'll need a CrateDB database to store the project's data in. Choose between a free hosted instance in the cloud, or run the database locally. Either option is fine. ### Cloud Option Create a database in the cloud by first pointing your browser at [`console.cratedb.cloud`](https://console.cratedb.cloud/). Login or create an account, then follow the prompts to create a "CRFREE" database on shared infrastructure in the cloud of your choice (choose from Amazon AWS, Microsoft Azure and Google Cloud). Pick a region close to where you live to minimize latency between your machine running the code and the database that stores the data. Once you've created your cluster, you'll see a "Download" button. This downloads a text file containing a copy of your database hostname, port, username and password. 
Make sure to download these as you'll need them later and won't see them again. Your credentials will look something like this example (exact values will vary based on your choice of AWS/Google Cloud/Azure etc): ``` Host: some-host-name.gke1.us-central1.gcp.cratedb.net Port (PostgreSQL): 5432 Port (HTTPS): 4200 Database: crate Username: admin Password: the-password-will-be-here ``` Wait until the cluster status shows a green status icon and "Healthy" status before continuing. Note that it may take a few moments to provision your database. ### Local Option The best way to run CrateDB locally is by using Docker. We've provided a Docker Compose file for you. Once you've installed [Docker Desktop](https://www.docker.com/products/docker-desktop/), you can start the database like this: ```bash docker compose up ``` Once the database is up and running, you can access the console by pointing your browser at: ``` http://localhost:4200 ``` Note that if you have something else running on port 4200 (CrateDB admin UI) or port 5432 (Postgres protocol port) you'll need to stop those other services first, or edit the Docker compose file to expose these ports at different numbers on your local machine. ## Creating the Database Tables Now you have a database, you'll need to create the tables that this project uses. Copy and paste the following SQL command into the database console, then execute it to create a table named `windfarms`. (If your database is in the Cloud, you can find the console in the menu to the left when logged in at `console.cratedb.cloud`. If you are running the database locally then go to `localhost:4200` and select the console icon from the left hand menu): ```sql CREATE TABLE windfarms ( id TEXT PRIMARY KEY, name TEXT, description TEXT INDEX USING fulltext WITH (analyzer='english'), description_vec FLOAT_VECTOR(2048), location GEO_POINT, territory TEXT, boundaries GEO_SHAPE INDEX USING geohash WITH (PRECISION='1m', DISTANCE_ERROR_PCT=0.025), turbines OBJECT(STRICT) AS ( brand TEXT, model TEXT, locations ARRAY(GEO_POINT), howmany SMALLINT ), capacity DOUBLE PRECISION, url TEXT ); ``` Then copy and paste this statement into the console, and execute it to create a table named `windfarm_output`: ```sql CREATE TABLE windfarm_output ( windfarmid TEXT, ts TIMESTAMP WITHOUT TIME ZONE, month GENERATED ALWAYS AS date_trunc('month', ts), day TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS date_trunc('day', ts), output DOUBLE PRECISION, outputpercentage DOUBLE PRECISION ) PARTITIONED BY (day); ``` ## Populating the Tables with Sample Data Right now your database tables are empty. Let's add some sample data! Copy and paste the following SQL statement into the console then execute it to insert records for each windfarm into the `windfarms` table: ```sql COPY windfarms FROM 'https://cdn.crate.io/downloads/datasets/cratedb-datasets/devrel/uk-offshore-wind-farm-data/wind_farms.json' RETURN SUMMARY; ``` Examine the output of this command once it's completed. You should see that 45 records were loaded with 0 errors. Next, let's load the sample power generation data into the `windfarm_output` table. Copy and paste this SQL statement into the console, then execute it: ```sql COPY windfarm_output FROM 'https://cdn.crate.io/downloads/datasets/cratedb-datasets/devrel/uk-offshore-wind-farm-data/wind_farm_output.json.gz' WITH (compression='gzip') RETURN SUMMARY; ``` Examine the output of this command once it's completed. You should expect 75,825 records to have loaded with 0 errors. 
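If you'd like to double-check the import, or just get an initial feel for the dataset, you can run a couple of ad-hoc queries in the console. These statements are optional and simply assume the `windfarms` and `windfarm_output` tables created above; the expected counts correspond to the `RETURN SUMMARY` figures.

```sql
-- Optional sanity checks: counts should match the RETURN SUMMARY output above.
SELECT COUNT(*) AS windfarm_count FROM windfarms;           -- expect 45
SELECT COUNT(*) AS output_row_count FROM windfarm_output;   -- expect 75825

-- Example exploration: the five largest wind farms by installed capacity,
-- with their average output percentage from the sample data.
SELECT w.name,
       w.territory,
       w.capacity,
       AVG(o.outputpercentage) AS avg_output_pct
FROM windfarms w
JOIN windfarm_output o ON o.windfarmid = w.id
GROUP BY w.name, w.territory, w.capacity
ORDER BY w.capacity DESC
LIMIT 5;
```

If the counts look low immediately after the import, the tables may simply not have refreshed yet; running `REFRESH TABLE windfarms, windfarm_output;` forces a refresh.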
## Configuring and Starting the Backend Server The backend server for this project has five different implementations, each with its own instructions. * To use C#, follow the instructions [here](dotnet/README.md). * To use Python, follow the instructions [here](python/README.md). * To use Node.js, follow the instructions [here](nodejs/README.md). * To use Go, follow the instructions [here](go/README.md). * To use Java, follow the instructions [here](java/README.md). ## Understanding the Code ### Server Code #### C# Version The server is written in C# and is contained in one file: `Program.cs`. This contains a minimal web application that runs code to access CrateDB when called on various endpoints, and also serves static files from the `wwwroot` folder. Database access is handled through [Npgsql](https://www.npgsql.org/index.html). #### Python Version The server is written in Python as a [Flask framework](https://flask.palletsprojects.com/) application. The code is contained in one file: `app.py`. This contains a minimal web application that runs code to access CrateDB when called on various endpoints, and also serves static files from the `static` folder. Database access is handled through [crate-python](https://github.com/crate/crate-python/). #### Node.js Version The server is written in JavaScript using the [Express framework](https://expressjs.com/). The code is contained in a single file: `server.js`. This contains a minimal web application that runs code to access CrateDB when called on various endpoints, and also serves static files from the `static` folder. Database access is handled using the [node-postgres](https://node-postgres.com/) driver. #### Go Version The server is written in Go using the [Fiber framework](https://docs.gofiber.io/). The code is contained in a single file: `server.go`. This contains a minimal web application that runs code to access CrateDB when called on various endpoints, and also serves static files from the `public` folder. Database access is handled using the [pgx](https://github.com/jackc/pgx) driver. #### Java Version The server is written in Java using the [Dropwizard framework](https://www.dropwizard.io/en/stable/). The project contains a minimal web application that runs code to access CrateDB when called on various endpoints, and also serves static files from the `src/main/resources/assets` folder. Database access is handled using the [PostgreSQL JDBC driver](https://jdbc.postgresql.org/). ### Front End Code The front end is the same for each of the backend implementations of the project. It uses the [Leaflet map framework](https://leafletjs.com/) with [OpenStreetMap](https://wiki.openstreetmap.org/wiki/OpenStreetMap_Carto) standard tiles. [Font Awesome](https://fontawesome.com/) is also included in the project (for rendering icons). The [Bulma CSS framework](https://bulma.io/) is used for styling and layout. The JavaScript code for the front end is contained in one file. For the C# project, this is `wwwroot/js/app.js` and for the Python project it is `static/js/app.js`. It uses the JavaScript [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) to interact with the backend server. ## CrateDB Academy Want to learn more about CrateDB? Take our free online "CrateDB Fundamentals" course, available now at the [CrateDB Academy](https://cratedb.com/academy/fundamentals/).# CrateDB RAG / Hybrid Search PDF Chatbot ## Introduction This is a demo application that uses CrateDB as a data store for an AI powered chatbot.
![Demo showing the chatbot front end](chatbot_demo.gif) --- **Original code by [Wierd van der Haar](https://github.com/wierdvanderhaar), Senior Solution Engineer at CrateDB.** [Watch a video walkthrough of this project on YouTube](https://www.youtube.com/watch?v=7UqDAct0EWQ). --- Source data / knowledge is extracted from text and images inside PDF documents, converted to vector embeddings, then stored in CrateDB. It's also stored as plain text with a full-text index. Users ask questions of this knowledge base using a natural language chatbot interface. The code performs a hybrid search (K nearest neighbor and full-text keyword) over the data, using the results as context for an LLM to generate a natural language response. For a detailed explanation of how this project works, check out our blog series on the topic: * **Part One:** [Building AI Knowledge Assistants for Enterprise PDFs: A Strategic Approach](https://cratedb.com/blog/building-ai-knowledge-assistants-for-enterprise-pdfs). * **Part Two:** [Core Techniques Powering Enterprise Knowledge Assistants](https://cratedb.com/blog/core-techniques-in-an-enterprise-knowledge-assistants). * **Part Three:** [Designing the Consumption Layer for Enterprise Knowledge Assistants](https://cratedb.com/blog/designing-the-consumption-layer-for-enterprise-knowledge-assistants). * **Part Four:** [Step by Step Guide to Building a PDF Knowledge Assistant](https://cratedb.com/blog/step-by-step-guide-to-building-a-pdf-knowledge-assistant). If you prefer to see the details as a single document, [here'a a markdown version in GitHub](https://github.com/crate/cratedb-examples/tree/main/topic/chatbot). ## Prerequisites To run this project you'll need to install the following software: * Python 3 ([download](https://www.python.org/downloads/)) - we've tested this project with Python 3.12 on macOS Sequoia. * Git command line tools ([download](https://git-scm.com/downloads)). * Your favorite code editor, to edit configuration files and browse/edit the code if you wish. [Visual Studio Code](https://code.visualstudio.com/) is great for this. * Access to a cloud or local CrateDB cluster (see below for details). * Some PDF files that you want to use as the source data for the chatbot (we've included a couple of our own white papers to get you started). You'll also need an OpenAI API key with sufficient credits to run the code. Obtain an API key and see pricing here: [OpenAI API Pricing](https://openai.com/api/pricing/). ## Getting the Code Next you'll need to get a copy of the code from GitHub by cloning the repository. Open up your terminal and change directory to wherever you store coding projects, then enter the following commands: ```bash git clone https://github.com/crate/devrel-pdf-rag-chatbot.git cd devrel-pdf-rag-chatbot ``` ## Getting a CrateDB Database You'll need a CrateDB database to store the project's data in. Choose between a free hosted instance in the cloud, or run the database locally. Either option is fine. ### Cloud Option Create a database in the cloud by first pointing your browser at [`console.cratedb.cloud`](https://console.cratedb.cloud/). Login or create an account, then follow the prompts to create a "CRFREE" database on shared infrastructure in the cloud of your choice (choose from Amazon AWS, Microsoft Azure and Google Cloud). Pick a region close to where you live to minimize latency between your machine running the code and the database that stores the data. Once you've created your cluster, you'll see a "Download" button. 
This downloads a text file containing a copy of your database hostname, port, username and password. Make sure to download these as you'll need them later and won't see them again. Your credentials will look something like this example (exact values will vary based on your choice of AWS/Google Cloud/Azure etc): ``` Host: some-host-name.gke1.us-central1.gcp.cratedb.net Port (PostgreSQL): 5432 Port (HTTPS): 4200 Database: crate Username: admin Password: the-password-will-be-here ``` Wait until the cluster status shows a green status icon and "Healthy" status before continuing. Note that it may take a few moments to provision your database. ### Local Option The best way to run CrateDB locally is by using Docker. We've provided a Docker Compose file for you. Once you've installed [Docker Desktop](https://www.docker.com/products/docker-desktop/), you can start the database like this: ```bash docker compose up ``` Once the database is up and running, you can access the console by pointing your browser at: ``` http://localhost:4200 ``` Note that if you have something else running on port 4200 (CrateDB admin UI) or port 5432 (Postgres protocol port) you'll need to stop those other services first, or edit the Docker compose file to expose these ports at different numbers on your local machine. ## Preparing the PDF Files The chatbot uses data extracted from PDF files. This can include text and images. Place one or more PDF files in the `chatbot/static` folder before running either component. The data extractor component will read these, extract data from them and store it in CrateDB. The chatbot component's web interface uses the PDFs as static assets, so that links to the original document can be presented as part of the chatbot's response. We've included a couple of CrateDB White Papers in PDF format to get you started. You'll find them in the `chatbot/static` folder. ## Configuring and Running Each Component This project is organized into two components, each contained in their own folder with their own `README` document. You'll find further instructions for each component in its folder. * The **data extractor** component can be found in the [data-extractor](./data-extractor/) folder. This component is responsible for reading PDF files, extracting and chunking the text data and generating text descriptions of images. The data is then stored in CrateDB for full-text and vector similarity searches. You need to run this component once, before you use the chatbot. * The **chatbot** component is contained in the [`chatbot`](./chatbot/) folder. This component receives plain text queries from users, performs a hybrid vector and full-text search over data that the data extractor stored in CrateDB, and presents the results as a chatbot response. ## CrateDB Academy Want to learn more about CrateDB? Take our free online "CrateDB Fundamentals" course, available now at the [CrateDB Academy](https://cratedb.com/academy/fundamentals/).# CrateDB / Express Spatial Data Demo ## Introduction This is a quick demo showing geospatial functionality in [CrateDB](https://cratedb.com/). Click on the map to drop a marker in the waters around the British Isles then hit search to find out which [Shipping Forecast](https://en.wikipedia.org/wiki/Shipping_Forecast) region your marker is in. Add more markers to plot a course or draw a polygon then hit search again to see which regions you're traversing. Hover over a region to see the shipping forecast for it (data isn't real time, so don't use this to plan a voyage)! 
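Behind the scenes, searches like these map onto CrateDB's geospatial functions. The sketch below is purely illustrative: the table and column names (`forecast_regions`, `name`, `boundary`) are placeholders rather than the schema that `init.sql` actually creates, but `within` and `intersects` are the kind of building blocks used for point-in-region and polygon-overlap lookups.

```sql
-- Illustrative sketch only: assumes a hypothetical table of regions
-- with a GEO_SHAPE column holding each region's boundary.

-- Which region contains a dropped marker?
SELECT name
FROM forecast_regions
WHERE within('POINT (-4.5 53.0)'::GEO_POINT, boundary);

-- Which regions does a drawn polygon overlap?
SELECT name
FROM forecast_regions
WHERE intersects('POLYGON ((-6 52, -2 52, -2 55, -6 55, -6 52))'::GEO_SHAPE, boundary);
```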
![Demo showing an example polygon search](shippingforecast.gif) ## Prerequisites You'll need to install the following to try this project locally: - [Git command line tools](https://git-scm.com/downloads) (optional - if you don't have these, just get a Zip file from GitHub instead). - [Docker Desktop](https://www.docker.com/products/docker-desktop/). - [Node.js](https://nodejs.org/) (version 18 or higher - I've tested this with version 18.18.2). - [Crash](https://cratedb.com/docs/crate/crash/en/latest/getting-started.html#installation) - the shell tool for CrateDB. - A modern browser. I've tested this with [Google Chrome](https://www.google.com/chrome/). ## Getting Started Begin by cloning the source code repository from GitHub onto your local machine (if you chose to download the repository as a Zip file, unzip it instead). Be sure to change directory into the newly created `devrel-shipping-forecast-geo-demo` folder afterwards. ```shell git clone https://github.com/crate/devrel-shipping-forecast-geo-demo.git cd devrel-shipping-forecast-geo-demo ``` The application keeps a couple of configurable values in an environment file. Create a file called `.env` by copying the example file provided: ```shell cp env.example .env ``` You shouldn't need to change any of the default values in this file. Next, start a local instance of CrateDB with Docker. ```shell docker compose up -d ``` Once the Docker container is up and running, your next steps are to create the required database schema and load the sample data. ```shell crash --host 'http://localhost:4200' < init.sql ``` Now install the Node/Express application's dependencies. ```shell npm install ``` You can now start the application. ```shell npm run dev ``` Point your browser at the following URL to interact with the application: ``` http://localhost:3000 ``` **Optional:** Navigate to CrateDB Admin to explore the database schema and sample data. ``` http://localhost:4200/ ``` ## Using the Application The application is map based... you'll see a map of the British Isles and surrounding seas. You can move around the map and zoom in and out using the usual controls. Click on the map to drop a marker. If you click "Search", the application will determine which (if any) Shipping Forecast region your marker is in and will outline that region on the map for you. Hover over the region to see details of its forecast (details are representative example data). Alternatively, drop some more markers on the map to build up a course around the British Isles. Click "Search" to see which Shipping Forecast regions your planned course passes through. Click the "Polygon" button to switch to polygon mode. Draw a polygon search area then click "Search" to see which Shipping Forecast regions intersect with your search area. Click "Reset" to clear markers from the map and start again, or adjust your existing markers and click "Search". ## Shutting Down To stop the application, press `Ctrl-C`. Stop the container running CrateDB like so: ``` docker compose down ``` ## Optional: Extra Configuration The application has two configurable parameters. Their values are stored in the `.env` file. They are: * `PORT` - the port number that the front end runs on. This defaults to 3000, change it if you'd like to use another port. * `CRATE_URL` - the URL that the application uses to connect to CrateDB. This defaults to `http://localhost:4200`. If you'd like to use the cloud version of CrateDB, sign up here then change the URL value to point to your cloud instance, supplying your username and password.
Example URL format: ```https://USER_NAME:PASSWORD@CLOUD_HOST_NAME:4200```. ## CrateDB Academy Want to learn more about CrateDB, including its geospatial capabilities? Take our free online "CrateDB Fundamentals" course, available now at the [CrateDB Academy](https://learn.cratedb.com).# TODO README Interested in learning more about this probject? There's a full explanation and demo in [this video on YouTube](https://www.youtube.com/watch?v=YIUJTbrwlAs). Callsigns in the last hour: ```sql select plane_id, callsign from planespotting.radio_messages where callsign is not null and ts >= now() - interval '1 hour'; ``` Latest data for planes that have a plane_id, callsign, altitude and position and updated in the last 2 minutes: ``` SELECT plane_id, callsign, (CURRENT_TIMESTAMP - latest_ts) as last_update, altitude, distance('POINT (-1.1436530095627766 52.94765937629119)', position) / 1000 as distance, position from ( select plane_id, (select callsign from planespotting.radio_messages where plane_id = planes.plane_id and callsign is not null order by ts desc limit 1) as callsign, (select altitude from planespotting.radio_messages where plane_id = planes.plane_id and altitude is not null order by ts desc limit 1) as altitude, (SELECT position from planespotting.radio_messages where plane_id = planes.plane_id and position is not null order by ts desc limit 1) as position, (SELECT ts from planespotting.radio_messages where plane_id = planes.plane_id order by ts desc limit 1) as latest_ts from (select distinct plane_id from planespotting.radio_messages) as planes ) as interesting_planes WHERE latest_ts >= current_timestamp - '2 mins'::interval and plane_id is not null and callsign is not null and altitude is not null and position is not null order by last_update ASC; ``` Grafana: https://cratedb.com/docs/guide/integrate/visualize/grafana.html# MongoDB/CrateDB/Grafana CDC Demonstration ## Introduction This is a small Python project that demonstrates how a CrateDB database can be populated and kept in sync with a collection in MongoDB using Change Data Capture (CDC). Before configuring and running this project, you should watch our [video walkthrough](https://www.youtube.com/watch?v=N8n-yg3ru8I). This project uses a hotel's front desk for its sample data. Imagine that guests can visit the front desk, or use the phone or the hotel's app to request different service items. Each request raises a job (for example "provide fresh towels", "call a taxi", "order room service food") that is associated with a room number. These jobs are stored as documents in a collection in MongoDB, and are updated periodically as staff members complete them - adding a completion time and the ID of the staff member who did the work. This data is then replicated to CrateDB for analysis and visualization in Grafana. The project includes Python scripts to create and update jobs in MongoDB, as well as an (optional) Grafana dashboard exported as a JSON file. ![Screenshot of the Grafana Dashboard](grafana_dashboard_screenshot.png) ## Prerequisites To run this project you'll need to install the following software: * Python 3 ([download](https://www.python.org/downloads/)) - we've tested this project with Python 3.12 on macOS Sequoia. * Git command line tools ([download](https://git-scm.com/downloads)). * Your favorite code editor, to edit configuration files and browse/edit the code if you wish. [Visual Studio Code](https://code.visualstudio.com/) is great for this. 
* Access to an instance of [CrateDB](https://console.cratedb.cloud) in the cloud. * Access to an instance of [MongoDB](https://www.mongodb.com/cloud/atlas/register) in the cloud. * Optional: Access to an instance of [Grafana](https://grafana.com/get/) in the cloud. ## Getting the Code Grab a copy of the code from GitHub by cloning the repository. Open up your terminal and change directory to wherever you store coding projects, then enter the following commands: ```bash git clone https://github.com/crate/devrel-mongo-cdc-demo.git cd devrel-mongo-cdc-demo ``` ## Getting a CrateDB Database in the Cloud Create a database in the cloud by first pointing your browser at [`console.cratedb.cloud`](https://console.cratedb.cloud/). Login or create an account, then follow the prompts to create a "CRFREE" database on shared infrastructure in the cloud of your choice (choose from Amazon AWS, Microsoft Azure and Google Cloud). Once you've created your cluster, you'll see a "Download" button. This downloads a text file containing a copy of your database hostname, port, username and password. Make sure to download these as you'll need them later and won't see them again. Your credentials will look something like this example (exact values will vary based on your choice of AWS/Google Cloud/Azure etc): ``` Host: some-host-name.gke1.us-central1.gcp.cratedb.net Port (PostgreSQL): 5432 Port (HTTPS): 4200 Database: crate Username: admin Password: the-password-will-be-here ``` Wait until the cluster status shows a green status icon and "Healthy" status before continuing. Note that it may take a few moments to provision your database. ### Database Schema Setup (CrateDB) This demo uses two tables, `jobs` and `staff`. The `jobs` table is created by the CDC data synchronization process. You'll need to create the `staff` table yourself by running the following SQL statements at your CrateDB console: ```sql CREATE TABLE staff ( id INTEGER, name TEXT ); ``` ```sql INSERT INTO staff (id, name) VALUES (1, 'Simon'), (2, 'Alice'), (3, 'Michael'), (4, 'Stefan'), (5, 'Alea'); ``` ## Getting a MongoDB Database in the Cloud You'll need to create a MongoDB database in the cloud - do this for free with [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/register). Create an empty collection called `jobs` in your new MongoDB instance. You should also create a role, a user, and configure IP access for CrateDB's CDC process. Instructions for these steps can be found [here](https://cratedb.com/docs/cloud/en/latest/cluster/integrations/mongo-cdc.html#set-up-mongodb-atlas-authentication). ## Setting up CDC from CrateDB Cloud The next step is to set up a CDC integration in your CrateDB Cloud cluster. Follow our instructions [here](https://cratedb.com/docs/cloud/en/latest/cluster/integrations/mongo-cdc.html#set-up-integration-in-cratedb-cloud) to do this. When setting up the target table, be sure to name it `jobs`, and to select `dynamic` as the object type in the dropdown. ## Editing the Project Configuration File The Python scripts use a `.env` file to store the MongoDB connection details in. We've provided a template file that you can copy like this: ```bash cp env.example .env ``` Now, edit `.env` as follows: Set the value of `MONGODB_URI` to be: `mongodb+srv://<username>:<password>@<host>/?retryWrites=true&w=majority`, replacing `<username>`, `<password>` and `<host>` with the values for your MongoDB Atlas cluster. Set the value of `MONGODB_DATABASE` to be the name of the database in your MongoDB Atlas cluster. Set the value of `MONGODB_COLLECTION` to be `jobs`. Save your changes.
Remember, this file contains secrets... don't commit it to source control! ## Setting up a Python Environment You should create and activate a Python Virtual Environment to install this project's dependencies into. To do this, run the following commands: ```bash python -m venv venv . ./venv/bin/activate ``` Now install the dependencies that this project requires: ```bash pip install -r requirements.txt ``` ## Running the Python Code ### Job Creator Component Run the job creator component to create new job documents in the `jobs` collection in MongoDB. It will generate a random job, add it to the collection, then sleep for a random time before repeating the process. ```bash python job_creator.py ``` You can stop the job creator with `Ctrl-C`. ### Job Completer Component Run the job completer component to update existing job documents in the `jobs` collection. This component picks the oldest outstanding job in the collection, sleeps for a while to pretend to perform the work required, then updates the job with a completion time and a randomly chosen staff ID for the staff member that completed the job. ```bash python job_completer.py ``` You can stop the job completer with `Ctrl-C`. To simulate a constant flow of work through the databases, run both the job creator and the job completer at the same time. ## Some Example SQL Queries Once you have data flowing from MongoDB to your CrateDB cluster, you can start to run some SQL queries. At the CrateDB console, try some of these example queries. ### How many jobs are outstanding? ```sql select count(*) as backlog from jobs where document['completedAt'] is null; ``` ### How many jobs have been completed? ```sql select count(*) as completed from jobs where document['completedAt'] is not null; ``` ### Outstanding jobs by type: ```sql select document['job'] as job_type, count(*) as backlog from jobs where document['completedAt'] is null group by job_type order by backlog desc ``` ### Average time to complete a job: ```sql select round(avg(document['completedAt'] - document['requestedAt']) / 1000) as job_avg_time from jobs where document['completedAt'] is not null ``` ### League table of who has completed the most jobs: ```sql select s.id, s.name, count(j.document) as jobs_completed from staff s join jobs j on s.id = j.document['completedBy'] where j.document['completedAt'] is not null group by s.id, s.name order by jobs_completed desc ``` ## Optional: Grafana Dashboard The file `grafana_dashboard.json` contains an export of a [Grafana](https://grafana.com/get/) dashboard that visualizes some of the above queries. To use this, you'll need to sign up for a free Grafana cloud instance and connect it to your CrateDB Cloud cluster using a Postgres data source (see [Grafana documentation](https://grafana.com/docs/grafana/latest/datasources/postgres/)). ## CrateDB Academy Want to learn more about CrateDB? Take our free online "CrateDB Fundamentals" course, available now at the [CrateDB Academy](https://cratedb.com/academy/fundamentals/).(index)= (cloud-docs-index)= # CrateDB Cloud CrateDB Cloud is a fully managed, terabyte-scale, and cost-effective analytics database that lets you run analytics over vast amounts of data in near real time, even with complex queries. It is an SQL database service for enterprise data warehouse workloads that works across clouds and scales with your data.
:::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slimmer :columns: 6 :::{rubric} Database Features ::: CrateDB Cloud helps you manage and analyze your data with procedures like machine learning, geospatial analysis, and business intelligence. CrateDB Cloud's scalable, distributed analysis engine lets you query terabytes worth of data efficiently. CrateDB provides a rich data model including container-, geospatial-, and vector-data types, and capabilities for full-text search. :::: ::::{grid-item} :class: rubric-slimmer :columns: 6 :::{rubric} Operational Benefits ::: CrateDB Cloud is a fully managed enterprise service, allowing you to deploy, monitor, back up, and scale your CrateDB clusters in the cloud without the need to do database management yourself. With CrateDB Cloud, there's no infrastructure to set up or manage, letting you focus on finding meaningful insights using plain SQL, and taking advantage of flexible pricing models across on-demand and flat-rate options. :::: ::::: :::{rubric} Learn ::: Users around the world rely on CrateDB Cloud clusters to store billions of records and terabytes of data, all accessible without delays. If you want to start using CrateDB Cloud, or make the most of your existing subscription, we are maintaining resources and tutorials to support you correspondingly. ::::{grid} 1 2 2 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {octicon}`rocket` Quick Start :link: quick-start :link-type: ref Learn how to sign up and get started with a free cluster. ::: :::{grid-item-card} {octicon}`file-code` Import :link: cluster-import :link-type: ref Import your own data into your CrateDB Cloud cluster. ::: :::{grid-item-card} {octicon}`table` Console :link: cluster-console :link-type: ref Explore your data and execute SQL queries in the Console. ::: :::{grid-item-card} {octicon}`tools` Manage :link: cloud-howtos-index :link-type: ref Learn how to manage your cluster. ::: :::{grid-item-card} {octicon}`terminal` Croud CLI :link: cluster-deployment-croud :link-type: ref A command-line tool to operate your managed clusters. ::: :::: Do you want to learn about key database drivers and client applications for CrateDB, such as CrateDB Admin UI, crash, psql, DataGrip, and DBeaver? Discover how to configure these tools and explore CrateDB's compatibility with analytics, ETL, BI, and monitoring solutions. ::::{grid} 1 2 2 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {material-outlined}`table_chart` Admin UI :link: crate-admin-ui:index :link-type: ref Each CrateDB Cloud cluster offers a dedicated Admin UI, which can be used to explore data, schema metadata, and cluster status information. ::: :::{grid-item-card} {material-outlined}`link` Clients, Tools, and Integrations :link: crate-clients-tools:index :link-type: ref Learn about compatible client applications and tools, and how to configure your favorite client library to connect to a CrateDB cluster. ::: :::: :::{toctree} :maxdepth: 1 :hidden: Quick Start Services Import Console Automation Integrations Export Backups Manage Cluster Billing Access Management Networking & Connectivity API How Tos Croud CLI Reference ::: [CrateDB]: https://crate.io/product/ [Croud CLI]: https://crate.io/docs/cloud/cli/ [How-To Guides]: https://crate.io/docs/cloud/en/latest/howtos/ [Reference]: https://crate.io/docs/cloud/en/latest/reference/(services)= # Services In the realm of CrateDB Cloud services, understanding your options is crucial for optimizing both performance and costs. 
This section of the documentation provides an in-depth look at the various service plans we offer, catering to a wide range of use-cases from small-scale applications to enterprise-level deployments. Our service plans are engineered for scalability, reliability, and performance. ::::{grid} 1 2 2 3 :margin: 0 0 0 0 :gutter: 01 :::{grid-item-card} Shared _non-critical workloads_ - Single Node - Up to 8 shared vCPUs - Up to 12 GiB RAM - Up to 1 TiB storage - Backups (once per day) - Single-AZ - Development support ::: :::{grid-item-card} Dedicated _production workloads_ - Up to 9 Nodes - Up to 144 vCPUs - Up to 495 GiB RAM - Up to 72 TiB storage - Backups (once per hour) - Multi-AZ - Basic Support --- - AWS / Azure Private Link - Uptime SLAs - Premium support available ::: :::{grid-item-card} Custom _large production workloads_ - Any cluster size - Custom compute options - Dedicated master nodes - Unlimited Storage - Custom Backups - Premium Support - AWS / Azure Private Link - Uptime SLAs
[Contact us for more information](https://cratedb.com/contact) ::: :::: ## Shared CrateDB Cloud's Shared Plan provides an affordable and easy-to-setup option for users who require basic database functionalities. The plan is built around the principle of cost-effectiveness and is particularly well-suited for development, testing, or non-critical production environments. **Node sizes** :::{table} :width: 700px :widths: 100, 100, 150, 150, 200 :align: left | Plan | Size | vCPUs | RAM | Storage | |----|--------|-----------|----------| ---- | | Shared | CRFREE* | up to 2 | 2 GiB | 8 GiB | | Shared | S2 | up to 2 | 2 GiB | 8 GiB to 1TiB | | Shared | S4 | up to 3 | 4 GiB | 8 GiB to 1TiB | | Shared | S6 | up to 4 | 6 GiB | 8 GiB to 1TiB | | Shared | S12 | up to 8 | 12 GiB | 8 GiB to 1TiB | ::: **Variable Performance**
Since your cluster will be sharing vCPUs with other clusters, the performance may vary depending on the overall load on the underlying machine. This variability makes it less predictable compared to Dedicated Plans, where your database is running on dedicated resources. **Fair-Use Principle**
The Shared Plan operates on a fair-use principle. All users are expected to utilize the shared resources responsibly so that the system remains equitable and functional for everyone. :::{note} __*CRFREE__: This plan is aimed at new users who want to test and evaluate CrateDB Cloud and is perpetually free to use. Every user can deploy one free tier cluster in their organization without adding a payment method. Free tier clusters will be suspended if they are not used for 4 days, and deleted after 10 more days of inactivity. They cannot be scaled or changed. ::: ## Dedicated CrateDB Cloud's Dedicated Plan is designed to provide robust, scalable, and high-performance database solutions. Unlike the Shared Plan, the Dedicated Plan offers dedicated resources, including dedicated vCPUs, to meet the demands of high-availability and high-throughput environments. **Node sizes** :::{table} :width: 700px :widths: 200, 100, 100, 100, 200 :align: left | Plan | Size | vCPUs | RAM | Storage | |----|--------|-----------|----------| ---- | | Dedicated | CR1 | 2 | 7 GiB | 32 GiB to 8 TiB | | Dedicated | CR2 | 4 | 14 GiB | 32 GiB to 8 TiB | | Dedicated | CR3 | 8 | 28 GiB | 32 GiB to 8 TiB | | Dedicated | CR4 | 16 | 55 GiB | 32 GiB to 8 TiB | ::: All Dedicated Plans can be scaled from 1 to 9 nodes. Depending on the number of nodes, the overall cluster size can be scaled up to the following limits: **Cluster sizes** :::{table} :width: 700px :widths: 200, 100, 100, 150, 150 :align: left | Plan | Size | vCPUs | RAM | Storage | |----|--------|-----------|----------| ---- | | Dedicated | CR1 | up to 18 | up to 63 GiB | up to 72 TiB | | Dedicated | CR2 | up to 36 | up to 126 GiB | up to 72 TiB | | Dedicated | CR3 | up to 72 | up to 252 GiB | up to 72 TiB | | Dedicated | CR4 | up to 144 | up to 495 GiB | up to 72 TiB | ::: **High Availability**
While it’s possible to start with just one node, for applications requiring high availability and fault tolerance, we recommend using at least three nodes. This ensures data replication and allows the cluster to handle node failures gracefully. Dedicated nodes are automatically deployed across three availability zones, and the specific zone for each node cannot be manually configured. A single dedicated node is placed in one zone, two nodes are distributed across two zones, and three or more nodes utilize all three availability zones, with nodes distributed as uniformly as possible. While a node count that is a multiple of three (e.g., 3, 6, 9, 12, etc.) provides optimal distribution across zones, it is not strictly required for high availability. ## Custom For organizations with specialized requirements that go beyond the Shared and Dedicated Plans, CrateDB Cloud offers custom solutions tailored to your specific needs. Our sales team and solutions engineers work closely with you to architect and deploy a custom cluster configuration, ensuring optimal performance, scalability, and security for your mission-critical applications. Whether you have stringent compliance mandates, complex integrations, or unique scalability challenges, our custom solutions provide the flexibility and expertise to meet your business objectives and technical requirements.
(organization-billing)= # Billing CrateDB Cloud offers flexible billing options to accommodate various needs and preferences. We only bill for actual usage of services, meaning there are no flat fees or minimum payments. ## Billing Information In the Billing tab under the Organization overview, you can add and edit your billing information, including your company address, country of residence, VAT info, invoice email, phone contacts, and more. You need to fill out this information whenever you use a paid offer on CrateDB Cloud, regardless of the payment method. ## Subscriptions After adding your billing information, you can add a subscription (payment method). We currently support the following payment methods: - **Cloud Marketplaces**: Available on [AWS](https://aws.amazon.com/marketplace/pp/prodview-l7rqf2xpeaubk), [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/crate.cratedbcloud), and [GCP](https://console.cloud.google.com/marketplace/product/cratedb-public/cratedb-gcp) Marketplaces. - **Credit Card**: Available worldwide. We use Stripe as our payment provider. - **Bank Transfer**: Available in the EU. We use Stripe as our payment provider. - **Custom Contract**: For large individual deployments, [contact](https://cratedb.com/contact) our sales team. :::{tip} **Marketplace Committed Spend:** All three marketplace offerings (AWS, Azure, GCP) can be applied towards any committed spend agreement (e.g., MACC) you have with the cloud provider. This effectively reduces your committed spend balance and allows you to use CrateDB as if it were a native service provided by the cloud provider. ::: ### Setup New Subscription ::::{tab} Cloud Marketplaces
1. Register for an account on the [CrateDB Cloud sign-in page](https://console.cratedb.cloud/).
2. Navigate to the **"Billing"** tab on the right side.
3. Add your billing information.
4. Click on **"Add Payment Method"**.
5. Select your preferred cloud marketplace.
6. Follow the instructions to sign up for **CrateDB Cloud** through the selected marketplace.
7. After completing the subscription in the marketplace, you will be redirected to CrateDB Cloud.
8. Connect one of your organizations to the created marketplace SaaS subscription.

**You are now ready to deploy a cluster.**

When you deploy a cluster, usage will be reported regularly to the marketplace as a usage amount in USD and will appear on your regular cloud provider's invoice. Depending on your settlement currency, a conversion may be applied.
::::

::::{tab} Credit Card
1. Register for an account on the [CrateDB Cloud sign-in page](https://console.cratedb.cloud/).
2. Navigate to the **"Billing"** tab on the right side.
3. Add your billing information.
4. Click on **"Add subscription"**.
5. Click on **"Pay with credit or debit card"**.
6. Fill out the required information and click **"Save"**.

**You are now ready to deploy a cluster.**

The payment cycle is monthly and aligns with the calendar month. You will be charged for the previous period's usage and will receive an invoice at the email address you provided. If needed, you can add a new credit card to replace the current one.
::::

::::{tab} Bank Transfer
1. Register for an account on the [CrateDB Cloud sign-in page](https://console.cratedb.cloud/).
2. Navigate to the **"Billing"** tab on the right side.
3. Add your billing information.
4. Click on **"Add Payment Method"**.
5. Click on **"Ask to enable"** next to **"Pay via Bank Transfer"**. Complete the form, and our team will contact you to process your request.

**After** your request has been approved:

1. Click on **"Add Payment Method"**.
2. Click on **"Pay via Bank Transfer"**.

**You are now ready to deploy a cluster.**

The payment cycle is monthly and aligns with the calendar month. You will be charged for the previous period's usage, and an invoice will be sent to the email address you provided. Payment is due within the specified terms.

:::{caution}
**Bank transfer payment is currently available only within the EU.** These payments are processed by Stripe and invoiced in Euros at a fixed USD/EUR exchange rate. You can find the current exchange rate within the CrateDB Cloud Console after setting up the bank transfer payment method.
:::
::::

::::{tab} Custom Contract
Custom contracts are individually tailored to your needs. Please [contact](https://cratedb.com/contact) our sales team to set up a custom contract that fits your requirements. :::: :::{note} To remove a subscription, please contact support. ::: ## Usage Reporting Whenever you use a paid offer in CrateDB Cloud, we collect your usage information, including the cluster compute and storage size and the number of nodes deployed. You can view this usage in the CrateDB Cloud Console, where you'll find a usage snapshot for the current calendar month, the current cost for the deployed service, and any available credits that might be applied to the current usage period. Be aware that the billing period might deviate from the shown calendar month usage. ## Credits Credits are another way to pay for CrateDB Cloud services and can be used together with other payment methods. Credits are applied to your account and are used up first before any other payment method is charged. This means that if credits are available, you will not be charged, nor will any usage be reported to your payment provider. You can see the remaining credits and their validity date on the Billing and Usage page. There is also the option to purchase more credits at a discount by [contacting](https://cratedb.com/contact) our sales team. :::{tip} **Free Trial Credits**: If you just signed up, you have the option to enable $200 of credits that can be used for any paid cluster. To enable these credits, you need to provide a valid payment method, which will only be charged if you have used up your credits. :::
(organization-api)=
# API

We offer an API to allow programmatic access to CrateDB Cloud. The API can be accessed by generating a key and secret in your [account page](https://console.cratedb.cloud/account/settings) (login required).

The API keys are bound to the CrateDB Cloud user account that generates them. This means that any actions performed using the API keys will be executed as that user. Consequently, the API keys inherit the same permissions as the user, allowing the same level of access and control over the organization and its resources.

![Cloud Console New Api Key](../_assets/img/create-api-key.png)

Click the *Generate new key* button to create your key. A popup with your key and secret will appear. Make sure to store your secret safely, as you cannot access it again.

(api-access)=
## Access

The key and secret can be used as HTTP Basic Auth credentials when calling the API, e.g.

:::{code} console
sh$ curl -s -u $your_key:$your_secret https://console.cratedb.cloud/api/v2/users/me
:::

This example will return details of the current user:

:::{code} console
{"email":"some@example.com","hmac":"...","is_superuser":false,"name":"Some User","organization_id":"123","status":"active","uid":"uid","username":"some@example.com"}
:::

(api-examples)=
## Examples

The API is documented with [Swagger](https://console.cratedb.cloud/api/docs) (login required). It contains endpoints for:

- Organizations
- Regions
- Projects
- Clusters
- Products
- Users
- Roles
- Subscriptions
- Audit logs

It provides example requests with all the required parameters, expected responses, and all response codes. Access the API documentation [here](https://console.cratedb.cloud/api/docs) (login required).

(cluster-import)=
# Import

The first thing you see in the "Import" tab is the history of your imports. You can see whether you imported from a URL or from a file, the file name, the table into which you imported, the date, and the status. By clicking "Show details" you can display the details of a particular import.

Clicking the "Import new data" button will bring up the Import page, where you can select the source of your data. If you don't have a dataset prepared, we also provide an example in the URL import section. It's the New York City taxi trip dataset for July of 2019 (about 6.3M records).

(cluster-import-url)=
## Import from URL

To import data, fill out the URL, the name of the table which will be created and populated with your data, the data format, and whether it is compressed. If a table with the chosen name doesn't exist, it will be created automatically. The following data formats are supported:

- CSV (all variants)
- JSON (JSON-Lines, JSON Arrays and JSON Documents)
- Parquet

Gzip compressed files are also supported.

![Cloud Console cluster upload from URL](../_assets/img/cluster-import-tab-url.png)

(cluster-import-s3)=
## Import from private S3 bucket

CrateDB Cloud allows convenient imports directly from S3-compatible storage. To import a file from a bucket, provide the name of your bucket and the path to the file. The S3 Access Key ID and S3 Secret Access Key are also needed. You can also specify the endpoint for non-AWS S3 buckets. Keep in mind that you may be charged for egress traffic, depending on your provider. There is also a volume limit of 10 GiB per file for S3 imports.

The usual file formats are supported: CSV (all variants), JSON (JSON-Lines, JSON Arrays and JSON Documents), and Parquet.
![Cloud Console cluster upload from S3](../_assets/img/cluster-import-tab-s3.png)

::::{note}
It's important to make sure that you have the right permissions to access objects in the specified bucket. For AWS S3, your user should have a policy that allows GetObject access, for example:

:::{code} json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetObject",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME/*"
    }
  ]
}
:::
::::

(cluster-import-azure)=
## Import from Azure Blob Storage Container

Importing data from private Azure Blob Storage containers is possible using a stored secret, which includes a secret name and either an Azure Storage Connection string or an Azure SAS Token URL. An admin user at the organization level can add this secret.

You can specify a secret, a container, a table, and a path in the form `/folder/my_file.parquet`. As with other imports, Parquet, CSV, and JSON files are supported. The file size limit for imports is 10 GiB per file.

![Cloud Console cluster upload from Azure Storage Container](../_assets/img/cluster-import-tab-azure.png)

(cluster-import-globbing)=
## Importing multiple files

Importing multiple files, also known as import globbing, is supported for any S3-compatible blob storage. The steps are the same as when importing from S3, i.e. bucket name, path to the files, and S3 ID/Secret. Importing multiple files from Azure Container/Blob Storage is also supported: `/folder/*.parquet`

Files to be imported are specified by using the well-known [wildcard](https://en.wikipedia.org/wiki/Wildcard_character) notation, also known as "globbing". In computer programming, [glob](https://en.wikipedia.org/wiki/Glob_(programming)) patterns specify sets of filenames with wildcard characters. The following example would import all the files from the single specified day:

:::{code} console
/somepath/AWSLogs/123456678899/CloudTrail/us-east-1/2023/11/12/*.json.gz
:::

![Cloud Console cluster import globbing](../_assets/img/cluster-import-globbing.png)

As with other imports, the supported file types are CSV, JSON, and Parquet.

(cluster-import-file)=
## Import from file

Uploading directly from your computer offers more control over your data. From a security point of view, you don't have to share the data on the internet just to be able to import it into your cluster, and you have more control over who has access to your data.

Your files are temporarily uploaded to a secure location managed by Crate (an S3 bucket in AWS) which is not publicly accessible. The files are automatically deleted after 3 days. You may re-import the same file into multiple tables without having to re-upload it within those 3 days. Up to 5 files may be uploaded at the same time, with the oldest ones being automatically deleted if you upload more.

![Cloud Console cluster upload from file](../_assets/img/cluster-import-tab-file.png)

As with other imports, the supported file formats are:

- CSV (all variants)
- JSON (JSON-Lines, JSON Arrays and JSON Documents)
- Parquet

There is also a limit to file size, currently 1 GB.

(overview-cluster-import-schema-evolution)=
## Schema evolution

Schema Evolution, available for all import types, enables the automatic addition of new columns to existing tables during data import, eliminating the need to pre-define table schemas. This feature is applicable to both pre-existing tables and those created during the import process. It can be toggled via the 'Schema Evolution' checkbox on the import page.
Note that Schema Evolution is limited to adding new columns; it does not modify existing ones. For instance, if an existing table has an 'OrderID' column of type **INTEGER**, and an import is attempted with Schema Evolution enabled for data where the 'OrderID' column is of type **STRING**, the import job will fail due to the type mismatch.

## File Format Limitations

**CSV** files:

1. Comma, tab and pipe delimiters are supported.

**JSON** files:

The following formats are supported for JSON:

1. JSON Documents. Each document will insert as a single row in the table.

   :::{code} json
   { "id": 1, "text": "example" }
   :::

2. JSON Arrays. Each array item will insert as a row.

   :::{code} json
   [
     { "id": 1, "text": "example" },
     { "id": 2, "text": "example2" }
   ]
   :::

3. JSON-Lines. Each line will insert as a row.

   :::{code} json
   {"id": 1, "text": "example"}
   {"id": 2, "text": "example2"}
   :::

(cluster-export)=
# Export

The "Export" section allows users to download specific tables/views. When you first visit the Export tab, you can specify the name of a table/view, the format (CSV, JSON, or Parquet), and whether you'd like your data to be gzip compressed (recommended for CSV and JSON files).

:::{important}
- The size limit for exporting is 1 GiB
- Exports are held for 3 days, then automatically deleted
:::

:::{note}
**Limitations with Parquet**: Parquet is a highly compressed data format for very efficient storage of tabular data. Please note that for OBJECT and ARRAY columns in CrateDB, the exported data will be JSON encoded when saving to Parquet (effectively saving them as strings). This is due to the complexity of encoding structs and lists in the Parquet format, where determining the exact schema might not be possible. When re-importing such a Parquet file, make sure you pre-create the table with the correct schema.
:::

(cluster-backups)=
# Backups

You can find the Backups page in the detailed view of your cluster, where you can see and restore all existing backups. By default, a backup is made every hour. The backups are kept for 14 days. We also keep the last 14 backups indefinitely, no matter the state of your cluster.

![Cloud Console cluster backups page](../_assets/img/cluster-backups.png)

You can also control the schedule of your backups by clicking the *Edit backup schedule* button.

![Cloud Console cluster backups edit page](../_assets/img/cluster-backups-edit.png)

Here you can create a custom schedule by selecting any number of hour slots. Backups will be created at the selected times. At least one backup a day is mandatory.

To restore a particular backup, click the *Restore* button. A popup window with a SQL statement will appear. Enter this statement in your Admin UI console, either by copy-pasting it or by clicking *Run query in Admin UI*. The latter will bring you directly to the Admin UI console with the statement automatically pre-filled.

![Cloud Console cluster backups restore page](../_assets/img/cluster-backups-restore.png)

You have a choice between restoring the cluster fully, or only specific tables.

(cluster-cloning)=
## Cluster Cloning

Cluster cloning is the process of duplicating all the data from a specific snapshot into a different cluster. Creating the new cluster isn't part of the cloning process; you need to create the target cluster yourself. You can clone a cluster from the Backups page.

![Cloud Console cluster backup snapshots](../_assets/img/cluster-backups.png)

Choose a snapshot and click the *Clone* button.
As with restoring a backup, you can choose between cloning the whole cluster, or only specific tables. ![Cloud Console cluster clone popup](../_assets/img/cluster-clone-popup.png) :::{note} Keep in mind that the full cluster clone will include users, views, privileges and everything else. Cloning also doesn't distinguish between cluster plans, meaning you can clone from CR2 to CR1 or any other variation. ::: (cluster-cloning-fail)= ## Failed cloning There are circumstances under which cloning can fail or behave unexpectedly. These are: - If you already have tables with the same names in the target cluster as in the source snapshot, the entire clone operation will fail. - There isn't enough storage left on the target cluster to accommodate the tables you're trying to clone. In this case, you might get an incomplete cloning as the cluster will run out of storage. - You're trying to clone an invalid or no longer existing snapshot. This can happen if you're cloning through [Croud](https://cratedb.com/docs/cloud/cli/en/latest/). In this case, the cloning will fail. - You're trying to restore a table that is not included in the snapshot. This can happen if you're restoring snapshots through [Croud](https://cratedb.com/docs/cloud/cli/en/latest/). In this case, the cloning will fail. When cloning fails, it is indicated by a banner in the cluster overview screen. ![Cloud Console cluster failed cloning](../_assets/img/cluster-clone-failed.png)(integrations-mongo-cdc)= # MongoDB CDC (Preview) CrateDB Cloud enables continuous data ingestion from MongoDB using Change Data Capture (CDC), providing seamless, real-time synchronization of your data. :::{caution} This integration is currently in preview and may have restricted availability. For more information, please [contact us](https://cratedb.com/contact). ::: ## Key Concepts The MongoDB CDC integration in CrateDB Cloud allows you to keep your data synchronized between your MongoDB Atlas cluster and your CrateDB Cloud cluster in real-time. ### How It Works The integration functions in two main stages: 1. **Initial Sync:** The integration performs a complete scan of your MongoDB collections, importing all existing data into your CrateDB Cloud cluster. 2. **Continuous Sync:** The integration uses MongoDB Change Streams to monitor changes in your MongoDB collections and syncs these updates to your CrateDB Cloud cluster in real-time, ensuring that your data remains current. ### Data Consistency and Mode For continuous sync, CrateDB Cloud uses MongoDB's **full document mode** to ensure data consistency. This mode guarantees that MongoDB returns the latest majority-committed version of the updated document. While receiving partial deltas is more efficient, full document mode provides robust functionality. Support for partial deltas may be added in the future to enhance performance and flexibility. --- ## Create a new Integration A MongoDB integration allows you to sync a single collection from a MongoDB Atlas cluster. You can reuse an existing connection across multiple integrations to continuously sync data from multiple MongoDB Atlas collections. Supported authentication methods: - MongoDB SCRAM Authentication - MongoDB X.509 Authentication ### Set Up MongoDB Atlas Authentication The following steps should be performed in the MongoDB Atlas UI. #### Step 1: Create a Custom Role 1. **Navigate to Database Access** Go to **Database Access** in the MongoDB Atlas UI for the cluster you want to connect to CrateDB Cloud. 2. 
**Add a Custom Role**
   Under **Custom Roles**, click **Add New Custom Role**.
3. **Set Up Read-Only Access**
   Assign the following actions or roles to the custom role:
   - `find`
   - `changeStream`
   - `collStats`

   Specify the databases and collections for these actions. You can update access permissions in the MongoDB Atlas UI later if needed.

#### Step 2: Create a User

Depending on whether you plan to use SCRAM or X.509 authentication, create a database user with one of the following methods:

:::{tab} SCRAM Authentication
1. **Navigate to Database Access**
   In the MongoDB Atlas UI, go to **Database Access** and click **Add New Database User**.
2. **Set Authentication Method**
   Choose **Password** as the authentication method and enter a username and password for the database user.
3. **Assign the Role**
   Under **Database User Privileges**, select the custom role created in Step 1.
4. **Copy User Credentials**
   Click **Add User**, and make sure to record the username and password. These credentials will be used later in the CrateDB Cloud Console.
:::

:::{tab} X.509 Authentication
1. **Navigate to Database Access**
   In the MongoDB Atlas UI, go to **Database Access** and click **Add New Database User**.
2. **Set Authentication Method**
   Choose **Certificate** as the authentication method.
3. **Assign the Role**
   Under **Database User Privileges**, select the custom role created in Step 1.
4. **Save the Certificate**
   Click **Add User**, and store the certificate securely. This will be required later in the CrateDB Cloud Console.
:::

#### Step 3: Configure IP Access

To allow CrateDB Cloud to access your MongoDB Atlas cluster, you must add the CrateDB Cloud IP addresses to the IP Access List in MongoDB Atlas.

1. **Navigate to Network Access**
   In the MongoDB Atlas UI, go to **Network Access** from the left navigation.
2. **Add IP Address**
   Click **Add IP Address** and choose an IP address or range to allow access. For testing purposes, you can select **Allow Access from Anywhere**, but for production, it is recommended to specify only the required IPs.

:::{note}
The specific IP addresses depend on the region of your CrateDB Cloud cluster. These IP addresses can also be found in the **Connection Details** section of the CrateDB Cloud Console, just before you click **Test Connection** during the setup process.

**Outbound IP Addresses**:

| Cloud Provider | Region       | IP Addresses                           |
|----------------|--------------|----------------------------------------|
| Azure          | East US 2    | `52.184.241.228/32`, `52.254.31.90/32` |
| Azure          | West Europe  | `51.105.153.175/32`, `108.142.34.5/32` |
| AWS            | EU West 1    | `34.255.75.224`                        |
| AWS            | US East 1    | `54.197.229.58`                        |
| AWS            | US West 2    | `54.189.16.20`                         |
| GCP            | US Central 1 | `34.69.134.49`                         |
:::

:::{note}
To set up a PrivateLink connection for the Mongo CDC integration, please reach out to our support team.
:::

#### Step 4: Access Connection String

You'll need to provide the connection string for your MongoDB Atlas cluster so that CrateDB Cloud can connect to it.

1. **Navigate to Your Cluster**
   In the MongoDB Atlas UI, navigate to the cluster you want to connect to CrateDB Cloud.
2. **Click "Connect"**
   From the cluster view, click on **Connect**.
3. **Select "Connect Your Application"**
   Choose **Connect your application** as the connection method.
4. **Copy the Connection String**
   Copy the connection string provided in the MongoDB Atlas UI.
   It will look like this:

   ```
   mongodb+srv://<username>:<password>@<cluster-url>/?retryWrites=true&w=majority
   ```

---

:::{note}
If you are using X.509 authentication, the connection string will look slightly different and will not include a username and password. Instead, it will reference the certificate file:

```
mongodb+srv://<cluster-url>/?authMechanism=MONGODB-X509&retryWrites=true&w=majority
```

Make sure to upload the X.509 certificate file when configuring the connection in CrateDB Cloud.
:::

### Set Up Integration in CrateDB Cloud

Follow these steps in the CrateDB Cloud Console to set up the MongoDB CDC integration:

#### Step 1: Create an Integration

1. Navigate to the **Import** section in the CrateDB Cloud Console.
2. Click **Create Integration** and select **MongoDB** as the source type.

#### Step 2: Configure Connection

1. Choose **Create New Connection** or select an existing one.
2. Fill in the following details:

:::{tab} SCRAM Authentication
- **Connection Name**: Provide a unique name for the connection.
- **Connection String**: Paste the connection string from MongoDB Atlas.
- **Username**: Enter the database username (required for SCRAM).
- **Password**: Enter the database password (required for SCRAM).
- **Default Database**: Specify the default database to use for this connection.
:::

:::{tab} X.509 Authentication
- **Connection Name**: Provide a unique name for the connection.
- **Connection String**: Paste the connection string from MongoDB Atlas.
- **Certificate**: Upload the X.509 certificate file.
- **Default Database**: Specify the default database to use for this connection.
:::

#### Step 3: Test the Connection

Click **Test Connection** to verify CrateDB Cloud can connect to your MongoDB Atlas cluster. Resolve any issues if the test fails.

#### Step 4: Select Collection

Enter the database and collection name from your MongoDB Atlas cluster that you want to sync with CrateDB Cloud.

#### Step 5: Select Target Table

1. Specify the target table in your CrateDB Cloud cluster where the data will be synced.
2. MongoDB records will be inserted into an object column called `document`.
3. Select the object type for the column:
   - **`dynamic`**: Allows indexing and columnar storage for faster querying.
   - **`ignored`**: Prevents type conflicts in CrateDB if your source data lacks a strict schema.

:::{note}
If your source data doesn't follow a strict schema, select `ignored` to avoid type conflicts. However, selecting `dynamic` provides faster query performance by utilizing indexes and columnar storage.
:::

#### Step 6: Configure Integration Settings

1. Enter a name for the integration.
2. Select the integration mode:
   - **Full Load Only**: Imports the data once but doesn't sync changes.
   - **Full Load and CDC**: Imports the data and syncs changes in real-time.
   - **CDC Only**: Syncs only new changes in real-time without importing existing data.

#### Step 7: Create the Integration

Click **Create Integration** to finalize the setup. CrateDB Cloud will now sync your MongoDB data based on the selected settings.

---

## Limitations

The MongoDB CDC integration is available as a preview. The feature is stable enough for broader use but may still have limitations, known issues, or incomplete features. While suitable for many use cases, it is not yet recommended for mission-critical workloads.

### Column Name Restrictions

Column or property names containing square brackets (`[]`) are not supported and are replaced with `__openbrk__` and `__closebrk__` respectively.
Likewise, column names containing dots (`.`) are not supported and are replaced with (`__dot__`). :::{warning} This behavior may change in future releases. ::: ### Unsupported Data Types The following MongoDB data types are not supported in the CrateDB Cloud MongoDB CDC integration: - **Long Strings** exceeding 32,766 characters are replaced with a placeholder value. - **Binary data types** other than UUIDs, which are converted to `TEXT` and **vectors**, which are converted to `ARRAY`s of numbers. - The `Decimal128` data type is not supported and is converted to a string, as CrateDB does not support a decimal data type.(feature)= (features)= (all-features)= # All Features :::{include} /_include/styles.html ::: All features of CrateDB at a glance. :::::{grid} 1 3 3 3 :margin: 4 4 0 0 :padding: 0 :gutter: 2 ::::{grid-item-card} {material-outlined}`lightbulb;2em` Functional :::{toctree} :maxdepth: 1 sql/index connectivity/index document/index relational/index Search: FTS, Geo, Vector, Hybrid blob/index ::: +++ CrateDB combines the power of Lucene with the advantages of industry-standard SQL. :::: ::::{grid-item-card} {material-outlined}`group;2em` Operational :::{toctree} :maxdepth: 1 cluster/index snapshot/index cloud/index storage/index index/index ::: +++ CrateDB scales horizontally using a shared-nothing architecture, inherited from Elasticsearch. :::: ::::{grid-item-card} {material-outlined}`read_more;2em` Advanced :::{toctree} :maxdepth: 1 query/index generated/index cursor/index fdw/index udf/index ccr/index ::: +++ Advanced features supporting daily data operations, all based on standard SQL. :::: ::::: :::{rubric} Connect and Integrate ::: Connect to CrateDB using traditional database drivers, and integrate CrateDB with popular 3rd-party applications in open-source and proprietary software landscapes. ::::{grid} 1 2 2 2 :margin: 4 4 0 0 :padding: 0 :gutter: 2 :::{grid-item-card} {material-outlined}`link;2em` Connectivity :link: connectivity :link-type: ref :link-alt: About connection options with CrateDB Connect to your CrateDB cluster using drivers, frameworks, and adapters. +++ **What's inside:** Connectivity and integration options with database drivers and applications, libraries, and frameworks. ::: :::{grid-item-card} {material-outlined}`sync;2em` Import and Export :link: import-export :link-type: ref :link-alt: About time series data import and export Import data into and export data from your CrateDB cluster. +++ **What's inside:** A variety of options to connect and integrate with 3rd-party ETL applications. ::: :::: :::{rubric} Highlights ::: Important fundamental features of CrateDB, and how they are applied within software solutions and application platforms in different scenarios and environments. ::::{grid} 1 2 2 2 :margin: 4 4 0 0 :padding: 0 :gutter: 2 :::{grid-item-card} {material-outlined}`description;2em` Document Store :link: document :link-type: ref :link-alt: About CrateDB's OBJECT data type Learn about CrateDB's OBJECT data type, how to efficiently store JSON or other structured data, also nested, and how to query this data with ease, fully indexed thus performant from the start, optionally using relational joins. +++ **What's inside:** CrateDB can do the same like Lotus Notes / Domino, CouchDB, MongoDB, and PostgreSQL's JSON data type. 
::: :::{grid-item-card} {material-outlined}`manage_search;2em` Full-Text Search :link: fulltext :link-type: ref :link-alt: About CrateDB's full-text search capabilities Learn about CrateDB's Okapi BM25 implementation, how to set up your database for full-text search, and how to query text data efficiently, to make sense of large volumes of unstructured information. +++ **What's inside:** Like Elasticsearch and Solr, CrateDB is based on Lucene, the premier industry-grade full-text search engine library. ::: :::: :::{rubric} Quotes ::: > When using CrateDB, a project that got started around the same time, it's like you've stumbled into an alternative reality where Elastic is a proper database. > > -– Henrik Ingo, Nyrkiö Oy, independent database consultant, MongoDB > CrateDB enables use cases we couldn't satisfy with other database systems, also with databases which are even stronger focused on the time series domain. > > CrateDB is not your normal database! > > -- Daniel Hölbling-Inzko, Director of Engineering Analytics, Bitmovin(sql)= # SQL :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slimmer :columns: auto 9 9 9 :::{rubric} Overview ::: CrateDB's features are available using plain SQL, and it is wire-protocol compatible to PostgreSQL. :::{rubric} About ::: SQL is the most widely used language for querying data and is the natural choice for people in many roles working with data in databases. CrateDB extends industry-standard SQL with functionalities to support its data types, data I/O procedures, and cluster management. :::{rubric} Details ::: CrateDB integrates well with commodity systems using standard database access interfaces like ODBC or JDBC, and it provides a proprietary HTTP interface on top. You have a variety of options to connect to CrateDB, and to integrate it with off-the-shelve, 3rd-party, open-source, and proprietary applications. Interfacing with your data in standard SQL syntax unlocks manifold integration capabilities instead of resorting to specialized query languages or DSLs like Query DSL (Elasticsearch), the MongoDB Query Language, Flux (InfluxDB), or PromQL (Prometheus). :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:sql` :::{rubric} Related ::: - {ref}`connect` - {ref}`query` :::{rubric} Product ::: - [Relational Database] - [Multi-model Database] - [JSON Database] - [Dynamic Database Schemas] - [Nested Data Structure] {tags-primary}`Query Language` {tags-primary}`Standard Interface` \ {tags-secondary}`SQL` {tags-secondary}`ODBC` {tags-secondary}`JDBC` :::: ::::: ## Synopsis Use scalar functions, sub-selects, and windowing, specifically illustrating the DATE_BIN function for resampling time series data using DATE_BIN, also known as grouping rows into time buckets, aka. time bucketing. ```sql SELECT ts_bin, battery_level, battery_status, battery_temperature FROM ( SELECT DATE_BIN('5 minutes'::INTERVAL, "time", 0) AS ts_bin, battery_level, battery_status, battery_temperature, ROW_NUMBER() OVER (PARTITION BY DATE_BIN('5 minutes'::INTERVAL, "time", 0) ORDER BY "time" DESC) AS "row_number" FROM doc.sensor_readings ) x WHERE "row_number" = 1 ORDER BY 1 ASC ``` ## Learn Please inspect more advanced SQL capabilities on the [](#query) page, and read about [](#features) in general. 
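If you only need one aggregated value per time bucket rather than the latest raw reading, the window function from the synopsis can be replaced by a plain `GROUP BY` on the same `DATE_BIN` expression. A minimal sketch, reusing the hypothetical `doc.sensor_readings` table assumed above:

```sql
-- Down-sample readings into 5-minute buckets and aggregate per bucket.
SELECT DATE_BIN('5 minutes'::INTERVAL, "time", 0) AS ts_bin,
       AVG(battery_level)        AS avg_battery_level,
       MAX(battery_temperature)  AS max_battery_temperature
FROM doc.sensor_readings
GROUP BY DATE_BIN('5 minutes'::INTERVAL, "time", 0)
ORDER BY ts_bin;
```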
:::{seealso} **Domains:** [](#metrics-store) • [](#analytics) • [](#industrial) • [](#timeseries) • [](#machine-learning) :::(connect)= (connectivity)= # Connectivity :::{include} /_include/links.md ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slimmer :columns: auto 9 9 9 :::{rubric} Overview ::: CrateDB connectivity options at a glance. You have a variety of options to connect to CrateDB, and to integrate it with off-the-shelve, 3rd-party, open-source, and proprietary applications. :::{rubric} About ::: CrateDB supports both the HTTP protocol and the PostgreSQL wire protocol, which ensures that many clients that work with PostgreSQL, will also work with CrateDB. Through corresponding drivers, CrateDB is compatible with JDBC, ODBC, and other database API specifications. By supporting SQL, CrateDB is compatible with many standard database environments out of the box. :::{rubric} Details ::: CrateDB provides plenty of connectivity options with database drivers, applications, and frameworks, in order to get time series data in and out of CrateDB, and to connect to other applications. To learn more, please refer to the documentation sections and hands-on tutorials about supported client drivers, libraries, and frameworks, and how to configure and use them with CrateDB optimally. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - [HTTP interface] - [PostgreSQL interface] - [SQL query syntax] ```{rubric} Protocols and API Standards ``` - [HTTP protocol] - [PostgreSQL wire protocol] - [JDBC] - [ODBC] - [SQL] :::: ::::: ## Synopsis In order to provide a CrateDB instance for testing purposes, use, for example, Docker. ```shell docker run --rm -it --publish=4200:4200 --publish=5432:5432 crate/crate:nightly ``` :::{tip} The [CrateDB Examples] repository also includes a collection of clear and concise examples how to connect to and work with CrateDB, using different environments, applications, or frameworks. ::: ## Learn ::::{grid} 2 3 3 3 :padding: 0 :::{grid-item-card} Ecosystem Catalog :link: catalog :link-type: ref :link-alt: Ecosystem Catalog :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`category;3.5em` +++ Explore all open-source and partner applications and solutions. ::: :::{grid-item-card} Driver Support :link: crate-clients-tools:connect :link-type: ref :link-alt: Driver Support :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`link;3.5em` +++ List of HTTP and PostgreSQL client drivers, and tutorials. ::: :::{grid-item-card} Integration Tutorials :link: integrate :link-type: ref :link-alt: Integration Tutorials :padding: 3 :class-card: sd-pt-3 :class-title: sd-fs-5 :class-body: sd-text-center :class-footer: text-smaller {material-outlined}`local_library;3.5em` +++ Learn how to integrate CrateDB with applications and tools. 
::: :::: [CrateDB Examples]: https://github.com/crate/cratedb-examples [Driver Support]: inv:crate-clients-tools:*:label#connect [Ecosystem Catalog]: inv:crate-clients-tools:*:label#index [HTTP interface]: inv:crate-reference:*:label#interface-http [HTTP protocol]: https://en.wikipedia.org/wiki/HTTP [JDBC]: https://en.wikipedia.org/wiki/Java_Database_Connectivity [ODBC]: https://en.wikipedia.org/wiki/Open_Database_Connectivity [PostgreSQL interface]: inv:crate-reference:*:label#interface-postgresql [PostgreSQL wire protocol]: https://www.postgresql.org/docs/current/protocol.html [SQL]: https://en.wikipedia.org/wiki/Sql [SQL query syntax]: inv:crate-reference:*:label#sql ```{include} /_include/styles.html ```(document)= (object)= # Document Store :::{include} /_include/links.md ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slimmer :columns: auto 9 9 9 :::{rubric} Overview ::: Learn how to efficiently store JSON documents or other structured data, also nested, using CrateDB's OBJECT and ARRAY container data types, and how to query this data with ease. CrateDB combines the advantages of typical SQL databases and strict schemas with the dynamic properties of NoSQL databases. While traditional object-relational databases allow you to store and process JSON data only opaquely, CrateDB handles objects as first-level citizens. :::{rubric} About ::: This feature allows users to access object properties in the same manner as columns in a table, including {ref}`full-text indexing ` and {ref}`aggregation ` capabilities. Even when using dynamic objects, i.e. when working without a strict object schema, all attributes are indexed by default, and can be queried efficiently. Storing documents in CrateDB provides the same convenience like the document-oriented storage layers of Lotus Notes / Domino, CouchDB, MongoDB, or PostgreSQL's JSON(B) types. :::{rubric} Details ::: CrateDB uses Lucene as a storage layer, so it inherits its concepts about storage entities and units, in the same spirit as Elasticsearch. :Document: In Lucene, the **Document** is a fundamental entity, as it is the unit of search and index. An index consists of one or more Documents. :Field: A Document consists of one or more Fields. A Field is simply a name-value pair. While Elasticsearch uses a [query DSL based on JSON], in CrateDB, you can work with Lucene Documents using SQL. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - {ref}`crate-reference:data-types-container` - [Querying containers](inv:crate-reference#sql_dql_container) - {ref}`crate-reference:scalar-objects` - {ref}`crate-reference:scalar-arrays` - {ref}`crate-reference:sql_dql_array_comparisons` - [Non-existing keys](inv:crate-reference#conf-session-error_on_unknown_object_key) ```{rubric} Related ``` - {ref}`sql` - {ref}`connect` - {ref}`fulltext` - {ref}`query` - {ref}`geospatial` - {ref}`machine-learning` - {ref}`analytics` {tags-primary}`JSON` {tags-primary}`Container` {tags-primary}`Document` {tags-primary}`Object` {tags-primary}`Array` {tags-primary}`Nested` {tags-primary}`Indexed` :::: ::::: ## Synopsis Store and query structured data, in this case blending capabilities of InfluxDB and MongoDB, but with much more headroom for other features, and using SQL instead of proprietary query languages. 
::::{grid} :padding: 0 :class-row: title-slim :::{grid-item} :columns: auto 4 4 4 **DDL** ```sql CREATE TABLE reading ( "timestamp" TIMESTAMP, "tags" OBJECT(DYNAMIC), "fields" OBJECT(DYNAMIC) ); ``` ::: :::{grid-item} :columns: auto 4 4 4 **DML** ```text INSERT INTO reading ( timestamp, tags, fields ) VALUES ( '2024-03-02T23:42:42', { "sensor_id" = '4834CC52', "site_id" = 23 }, { "temperature" = 42.42, "humidity" = 84.84 } ); ``` ::: :::{grid-item} :columns: auto 4 4 4 **DQL** ```sql SELECT * FROM reading WHERE tags['sensor_id'] = '4834CC52'; ``` ::: :::: :::{div} no-margin **Result** ```text +---------------+------------------------------------------+-------------------------------------------+ | timestamp | tags | fields | +---------------+------------------------------------------+-------------------------------------------+ | 1709422962000 | {"sensor_id": "4834CC52", "site_id": 23} | {"humidity": 84.84, "temperature": 42.42} | +---------------+------------------------------------------+-------------------------------------------+ SELECT 1 row in set (0.058 sec) ``` ::: ## Usage Working with structured data and container data types in CrateDB. ```{rubric} Object Column Strictness ``` For columns of type OBJECT, CrateDB supports different policies about the behaviour with undefined attributes, namely STRICT, DYNAMIC, and IGNORED, see {ref}`crate-reference:type-object-column-policy`. :STRICT: Reject any sub-column that is not defined upfront. :DYNAMIC: INSERT operations may dynamically add new sub-columns to the object definition. This is the default setting. :IGNORED: Also means DYNAMIC, but dynamically added sub-columns do not cause a schema update, and the new values will not be indexed. Because IGNORED columns are not recorded in the schema, you can insert mixed types into them. For example, one row may insert an integer and the next row may insert an object. Objects with a STRICT or DYNAMIC column policy do not allow this. ```{rubric} Querying DYNAMIC OBJECTs ``` To support querying DYNAMIC OBJECTs using SQL, where keys may not exist within an OBJECT, CrateDB provides the [error_on_unknown_object_key] session setting. It controls the behaviour when querying unknown object keys to dynamic objects. By default, CrateDB will raise an error if any of the queried object keys are unknown. When adjusting this setting to `false`, it will return `NULL` as the value of the corresponding key. :::{dropdown} Example using `SET error_on_unknown_object_key = false;` ``` cr> CREATE TABLE testdrive (item OBJECT(DYNAMIC)); CREATE OK, 1 row affected (0.563 sec) cr> SELECT item['unknown'] FROM testdrive; ColumnUnknownException[Column item['unknown'] unknown] cr> SET error_on_unknown_object_key = false; SET OK, 0 rows affected (0.001 sec) cr> SELECT item['unknown'] FROM testdrive; +-----------------+ | item['unknown'] | +-----------------+ +-----------------+ SELECT 0 rows in set (0.051 sec) ``` ::: ## Learn Written tutorials and video guides about working with CrateDB's container data types. :::{rubric} Articles ::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Blog: Handling Dynamic Objects in CrateDB** Learn fundamentals about CrateDB's OBJECT data type. {{ '{}[Handling Dynamic Objects in CrateDB]'.format(blog) }} ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Fundamentals` \ {tags-secondary}`OBJECT` {tags-secondary}`SQL` ::: :::: :::{rubric} Tutorials ::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **The Basics of CrateDB Objects** Learn the basics of CrateDB Objects. 
This tutorial is also available as video [Getting Started with CrateDB Objects]. {{ '{}[Objects in CrateDB]'.format(tutorial) }} ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Fundamentals` {tags-primary}`Docker` \ {tags-secondary}`OBJECT` {tags-secondary}`SQL` ::: :::: ::::{info-card} :::{grid-item} :columns: 9 **Querying Nested Structured Data** Today's data management tasks need to handle multi-structured and {ref}`nested ` data from different data sources. CrateDB's dynamic OBJECT data type allows you to store and analyze complex and nested data efficiently. In this tutorial, we will explore how to leverage this feature in marketing data analysis, along with the use of [generated columns], to parse and manage URLs. {{ '{}(#objects-basics)'.format(tutorial) }} ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Fundamentals` {tags-secondary}`OBJECT` \ {tags-secondary}`SQL` ::: :::: :::{rubric} Videos ::: ::::{info-card} :::{grid-item} :columns: auto auto 8 8 **Getting Started with CrateDB Objects** In this video, you will learn the basics of CrateDB Objects. It illustrates a simple use case to demonstrate how CrateDB Objects can add clarity to data models. - [Getting Started with CrateDB Objects] The talk gives an overview of the column policies for CrateDB OBJECTs: dynamic, strict, and ignored. It also provides examples of how these policies affect INSERT statements. Last but not least, it outlines how to insert, update, and delete records with OBJECT data types. ::: :::{grid-item} :columns: auto auto 4 4   {tags-primary}`Fundamentals` \ {tags-secondary}`OBJECT` {tags-secondary}`SQL` ::: :::: ::::{info-card} :::{grid-item} :columns: auto auto 8 8 **Ingesting and Querying JSON Documents with SQL** Learn how to unleash the power of nested data with CrateDB on behalf of an IoT use case, and a marketing analytics use case, using deeply nested data. - [Unleashing the Power of Nested Data: Ingesting and Querying JSON Documents with SQL] ::: :::{grid-item} :columns: auto auto 4 4   {tags-primary}`Fundamentals` \ {tags-secondary}`OBJECT` {tags-secondary}`SQL` ::: :::: ::::{info-card} :::{grid-item} :columns: auto auto 8 8 **Dynamic Schemas and Indexing Objects** Learn more about OBJECTs from the perspective of dynamic schema evolution and about OBJECT indexing. 
- [Dynamic Schemas and Indexing Objects] ::: :::{grid-item} :columns: auto auto 4 4   {tags-primary}`Fundamentals` \ {tags-secondary}`OBJECT` {tags-secondary}`SCHEMA` ::: :::: :::{seealso} **Product:** [Multi-model Database] • [JSON Database] • [Dynamic Database Schemas] • [Nested Data Structure] • [Relational Database] ::: [Dynamic Schemas and Indexing Objects]: https://youtu.be/lp51GphV9vo?t=495s&feature=shared [error_on_unknown_object_key]: inv:crate-reference#conf-session-error_on_unknown_object_key [generated columns]: #generated-columns [Getting Started with CrateDB Objects]: https://youtu.be/aQi9MXs2irU?feature=shared [Handling Dynamic Objects in CrateDB]: https://cratedb.com/blog/handling-dynamic-objects-in-cratedb [Objects in CrateDB]: https://community.cratedb.com/t/objects-in-cratedb/1188 [Unleashing the Power of Nested Data: Ingesting and Querying JSON Documents with SQL]: https://youtu.be/S_RHmdz2IQM?feature=shared ```{toctree} :maxdepth: 1 :hidden: learn ``` ```{include} /_include/styles.html ```(join)= (relational)= # Relational / JOINs :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 :::{rubric} Overview ::: CrateDB implements the relational concept of joining tables. :::{rubric} About ::: When selecting data from CrateDB, you can join one or more relations (tables) to combine columns into one result set. :::{rubric} Details ::: Joins are essential operations in relational databases. They create a link between rows based on common values and allow the meaningful combination of these rows. CrateDB was designed to support distributed joins effectively from the very beginning. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - [Join concepts][manual-join-concept] - [Join types][manual-join-types] - [Joined relation][manual-joined-relation] ```{rubric} Related ``` - {ref}`sql` - {ref}`query` - {ref}`search-overview` {tags-primary}`SQL` {tags-primary}`JOIN` :::: ::::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 :class: rubric-slim **Blog: Support for Joins on Multi-Row Sub-Selects** Joining on virtual tables is a crucial feature for many users, and it is especially useful when doing analytics on your data. A virtual table is what we call the result set returned by a subquery. Being able to use sub-selects as virtual tables for the lifetime of a query is very useful because it means that you can slice and dice your data multiple ways without having to alter your source data, or store duplicate versions of it. {{ '{}[blog-join-vtable]'.format(blog) }} ```{rubric} Example ``` ```sql SELECT * FROM (SELECT * FROM table_1 ORDER BY column_a LIMIT 100) AS virtual_table_1, INNER JOIN (SELECT * FROM (SELECT * FROM table_2 ORDER BY column_b LIMIT 100) AS virtual_table_2 GROUP BY column_c) AS virtual_table_3 ON virtual_table_1.column_a = virtual_table_3.column_a ``` ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Lab Notes` {tags-secondary}`Join Feature` {tags-secondary}`Sub-Selects` {tags-info}`2018` ::: :::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Blog: How We Made Joins 23 Thousand Times Faster** Introduces you to the nested-loop join, equi-join, sorted merge vs. hash join, and the block hash join algorithms, and the advancement into a distributed block hash join algorithm. 
This blog post illustrates the implementation of the distributed block hash join algorithm to support users who want to run joins on large tables for their analytics needs. [![Joins x23 - Part 1](https://img.shields.io/badge/Open-Part%201-darkblue?logo=GitHub)][blog-joins-faster-part1] [![Joins x23 - Part 2](https://img.shields.io/badge/Open-Part%202-darkblue?logo=GitHub)][blog-joins-faster-part2] [![Joins x23 - Part 2](https://img.shields.io/badge/Open-Part%203-darkblue?logo=GitHub)][blog-joins-faster-part3] \ {material-outlined}`link;1.3em` [FOSDEM '22: Distributed Join Algorithms in CrateDB] ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Lab Notes` {tags-secondary}`CrateDB Internals` {tags-secondary}`Join Algorithms` {tags-info}`2018` ::: :::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 :class: rubric-slim **Blog: Fine-tuning the query optimizer in CrateDB** In cases where you need it, the query optimizer, which tries to find the best logical plan possible for a given query, can be fine-tuned for specific queries, in order to yield better performance. By default, the effective join order is based on table statistics collected into the `pg_catalog.pg_stats` system table. CrateDB offers the option to disable join-reordering and rely exactly on the order of the tables as described in the query, which is helpful to get more control over the query execution. {{ '{}[blog-join-reordering]'.format(blog) }} ```{rubric} Example ``` ```sql SET optimizer_reorder_hash_join = false; SET optimizer_reorder_nested_loop_join = false; ``` ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Tuning Tipps` {tags-secondary}`Join Performance` {tags-secondary}`Performance Settings` {tags-info}`2023` ::: :::: :::{seealso} **Features:** [](#querying) **Domains:** [](#metrics-store) • [](#analytics) • [](#industrial) • [](#timeseries) • [](#machine-learning) **Product:** [Relational Database] • [Indexing, Columnar Storage, and Aggregations] ::: [blog-joins-faster-part1]: https://cratedb.com/blog/joins-faster-part-one [blog-joins-faster-part2]: https://cratedb.com/blog/lab-notes-how-we-made-joins-23-thousand-times-faster-part-two [blog-joins-faster-part3]: https://cratedb.com/blog/lab-notes-how-we-made-joins-23-thousand-times-faster-part-three [blog-join-reordering]: https://cratedb.com/blog/join-performance-to-the-rescue [blog-join-vtable]: https://cratedb.com/blog/joins-multi-row-subselects [FOSDEM '22: Distributed Join Algorithms in CrateDB]: https://cratedb.com/resources/videos/distributed-join-algorithms [manual-join-concept]: inv:crate-reference#concept-joins [manual-join-types]: inv:crate-reference#sql_joins [manual-joined-relation]: inv:crate-reference#sql-select-joined-relation(fts)= (fulltext)= (full-text)= (fulltext-search)= # Full-Text Search :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **BM25 term search based on Apache Lucene, using SQL: CrateDB is all you need.** :::{rubric} Overview ::: CrateDB can be used as a database to conduct full-text search operations building upon Apache Lucene. CrateDB is an exceptional choice for handling complex queries and large-scale data sets. One of its standout features are its full-text search capabilities, built on top of the powerful Lucene library. This makes it a great fit for organizing, searching, and analyzing extensive datasets. 
:::{rubric} About ::: [Full-text search] leverages the [BM25] search ranking algorithm, effectively implementing the storage and retrieval parts of a [search engine]. For managing a [full-text search] index for text values, Lucene uses an [inverted index] data structure, and the [Okapi BM25] search ranking algorithm. The inverted index data structure is a central component of a typical [search engine indexing] algorithm. Together with ranking, which enables search result relevance features, both effectively provide the storage and retrieval parts of a [search engine]. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - [](inv:crate-reference#sql_dql_fulltext_search) - [](inv:crate-reference#fulltext-indices) - [](inv:crate-reference#predicates_match) - [](inv:crate-reference#ref-create-analyzer) ```{rubric} Related ``` - {ref}`sql` - {ref}`geo-search` - {ref}`vector-search` - {ref}`hybrid-search` - {ref}`query` {tags-primary}`SQL` {tags-primary}`Full-Text Search` {tags-primary}`Okapi BM25` :::: ::::: :::{rubric} Full-text search using SQL ::: :::{div} CrateDB uses Lucene as a storage layer, so it inherits the implementation and concepts of Lucene, in the same spirit as the Apache Solr search server, and Elasticsearch. While Elasticsearch uses a [query DSL based on JSON], in CrateDB, you can work with text search using SQL, using a PostgreSQL-compatible interface. ::: :::{rubric} Details about the inverted index ::: Lucene's indexing strategy for text fields relies on a data structure called [inverted index], which enables very efficient search over textual data, and is defined as a "data structure storing a mapping from content, such as words and numbers, to its location in the database file, document or set of documents". Depending on the configuration of a column, the index can be plain (default) or full-text. An index of type "plain" indexes content of one or more fields without analyzing and tokenizing their values into terms. To create a "full-text" index, the field value is first analyzed and, based on the used analyzer, split into smaller units, such as individual words, a processing step called tokenization. A full-text index is then created for each text unit separately. :::{rubric} Details about ranking with BM25 ::: In information retrieval, {abbr}`Okapi BM25 (BM is an abbreviation of "best matching", BM25 stands for "Best Match 25", the 25th iteration of this scoring algorithm.)` is a popular ranking function used by search engines to estimate the relevance of documents to a given search query. The BM25 method has become the default scoring formula in Lucene, and is also the relevance scoring formula used by CrateDB. The article [BM25: The Next Generation of Lucene Relevance] compares traditional TF/IDF to BM25, including illustrative graphs. To learn more details about what's inside, please also refer to [Similarity in Elasticsearch] and [BM25 vs. Lucene Default Similarity]. ## Synopsis Populate and query a Lucene full-text index using SQL. 
::::{grid} :padding: 0 :class-row: title-slim :::{grid-item} :columns: auto 6 6 6 **DDL** ```sql CREATE TABLE documents ( name STRING PRIMARY KEY, description TEXT, INDEX ft_english USING FULLTEXT(description) WITH ( analyzer = 'english' ), INDEX ft_german USING FULLTEXT(description) WITH ( analyzer = 'german' ) ); ``` ::: :::{grid-item} :columns: auto 6 6 6 **DML** ```sql INSERT INTO documents (name, description) VALUES ('Quick fox', 'The quick brown fox jumps over the lazy dog.'), ('Franz jagt', 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern.') ; ``` ::: :::{grid-item} :columns: auto 6 6 6 **DQL** ```sql SELECT name, _score FROM documents WHERE MATCH( (ft_english, ft_german), 'jump OR verwahrlost' ) ORDER BY _score DESC; ``` ::: :::{grid-item} :columns: auto 6 6 6 **Result** ```text +------------+------------+ | name | _score | +------------+------------+ | Franz jagt | 0.13076457 | | Quick fox | 0.13076457 | +------------+------------+ SELECT 2 rows in set (0.034 sec) ``` ::: :::: :::{rubric} More Examples ::: Tweak fuzziness to get approximate matches. ```sql SELECT _score, city, country, population FROM cities WHERE MATCH(city_ascii, 'nw yurk') USING best_fields WITH (fuzziness = 1) ORDER BY _score DESC; ``` ## Usage Full-text search in CrateDB means using the MATCH predicate, and optionally configuring analyzers. :::{rubric} MATCH predicate ::: CrateDB's [MATCH predicate] performs a fulltext search on one or more indexed columns or indices and supports different matching techniques. In order to use fulltext searches on a column, a [fulltext index with an analyzer](inv:crate-reference#sql_ddl_index_fulltext) must be created for this column. :::{rubric} Analyzers, Tokenizers, and Filters ::: Analyzers consist of two parts, tokenizers and filters. With CrateDB, you can define custom analyzers, or configure the standard analyzers according to your needs. ## Learn Learn how to set up your database for full-text search, how to create the relevant indices, and how to query your text data efficiently. A few must-reads for anyone looking to make sense of large volumes of unstructured text data. :::{rubric} Advanced Features ::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **FTS Options** Learn about the stack of options relevant for full-text search, like applying [](#fts-fuzzy), or using [](#fts-synonyms). {hyper-navigate}`Full-Text Search Options ` ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Introduction` \ {tags-secondary}`FTS Options` \ {tags-secondary}`Fuzzy Matching` \ {tags-secondary}`Synonyms` ::: :::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Custom Analyzers** This tutorial illustrates how to define custom analyzers using the `CREATE ANALYZER` SQL command, for example to use fuzzy searching, how to use synonym files, and corresponding technical backgrounds about their implementations. {hyper-navigate}`Analyzers, Tokenizers, and Filters ` ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Introduction` \ {tags-secondary}`Full-Text Search` \ {tags-secondary}`Lucene Analyzer` ::: :::: :::{rubric} Tutorials ::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Exploring the Netflix catalog using full-text search** The tutorial illustrates the BM25 ranking algorithm for information retrieval, by exploring how to manage a dataset of Netflix titles. 
{hyper-navigate}`Netflix Tutorial ` ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Introduction` \ {tags-secondary}`Full-Text Search` \ {tags-secondary}`BM25` ::: :::: :::{rubric} Articles ::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Indexing and Storage in CrateDB** This article explores the internal workings of the storage layer in CrateDB, with a focus on Lucene's indexing strategies. {hyper-navigate}`Indexing and Storage in CrateDB <[Indexing and Storage in CrateDB]>` The CrateDB storage layer is based on Lucene indexes. Lucene offers scalable and high-performance indexing which enables efficient search and aggregations over documents and rapid updates to the existing documents. We will look at the three main Lucene structures that are used within CrateDB: Inverted Indexes for text values, BKD-Trees for numeric values, and Doc Values. :Inverted Index: You will learn how inverted indexes are implemented in Lucene and CrateDB. :BKD Tree: Better understand the BKD tree, starting from KD trees, and how this data structure supports range queries in CrateDB. :Doc Values: This data structure supports more efficient querying document fields by id, performs column-oriented retrieval of data, and improves the performance of aggregation and sorting operations. ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Introduction` \ {tags-secondary}`Lucene Indexing` ::: :::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Indexing Text for Both Effective Search and Accurate Analysis** This article explores how Qualtrics uses CrateDB in Text iQ to provide text analysis services for everything from sentiment analysis to identifying key topics, and powerful search-based data exploration. {hyper-navigate}`Indexing Text for Both Effective Search and Accurate Analysis <[Indexing Text for Both Effective Search and Accurate Analysis]>` CrateDB uses Elasticsearch technology under the hood to manage cluster creation and communication, and also exposes an Elasticsearch API that provides access to all the indexing capabilities in Elasticsearch that Qualtrics needed. The articles explains integral parts of an FTS text processing pipeline, including analyzers, optionally using tokenizers or character filters, and how they can be customized to specific needs, using plugins for CrateDB. ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Introduction` \ {tags-secondary}`Analyzer, Tokenizer` \ {tags-secondary}`Plugin` ::: :::: :::{toctree} :maxdepth: 2 :hidden: options analyzer learn ::: [BM25: The Next Generation of Lucene Relevance]: https://opensourceconnections.com/blog/2015/10/16/bm25-the-next-generation-of-lucene-relevation/ [BM25 vs. 
Lucene Default Similarity]: https://www.elastic.co/blog/found-bm-vs-lucene-default-similarity [full-text search]: https://en.wikipedia.org/wiki/Full_text_search [Indexing and Storage in CrateDB]: https://cratedb.com/blog/indexing-and-storage-in-cratedb [Indexing Text for Both Effective Search and Accurate Analysis]: https://www.qualtrics.com/eng/indexing-text-for-both-effective-search-and-accurate-analysis/ [MATCH predicate]: inv:crate-reference#predicates_match [Okapi BM25]: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/okapi_trec3.pdf [search engine]: https://en.wikipedia.org/wiki/Search_engine [search engine indexing]: https://en.wikipedia.org/wiki/Index_(search_engine) [Similarity in Elasticsearch]: https://www.elastic.co/blog/found-similarity-in-elasticsearch [TREC-3 proceedings]: https://web.archive.org/web/20250316042308/https://trec.nist.gov/pubs/trec3/t3_proceedings.html(geo)= (geo-search)= (geospatial)= (geospatial-search)= # Geospatial Search :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **CrateDB supports location data for efficiently storing and querying geographic and spatial/geospatial data.** :::{rubric} Overview ::: CrateDB can be used as a database to conduct geospatial search operations building upon the Prefix Tree and BKD-tree index structures of Apache Lucene. :::{rubric} About ::: Using spatial search, you can: - Index points or other shapes. - Filter search results by a bounding box, circle, donut, or other shape. - Sort or boost scoring by distance between points, or relative area between rectangles. :::{rubric} Details ::: CrateDB's GEO_POINT and GEO_SHAPE geographic data types represent points or shapes in a 2D world. :GEO_POINT: A geographic data type used to store latitude and longitude coordinates. :GEO_SHAPE: A geographic data type used to store 2D shapes defined as GeoJSON geometry objects. It supports Point, MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon, and GeometryCollection. When inserting spatial data, you can use [GeoJSON] or [WKT] formats. - Geographic points can be inserted as a double precision array with longitude and latitude values, or by using a WKT string. - Geographic shapes can be inserted as GeoJSON object literal or parameter as seen above and as WKT string. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - [](inv:crate-reference#data-types-geo-point) - [](inv:crate-reference#data-types-geo-shape) - [](inv:crate-reference#sql_dql_geo_search) - [distance()](inv:crate-reference#scalar-distance) - [within()](inv:crate-reference#scalar-within) - [intersects()](inv:crate-reference#scalar-intersects) - [latitude() and longitude()](inv:crate-reference#scalar-latitude-longitude) - [geohash()](inv:crate-reference#scalar-geohash) - [area()](inv:crate-reference#scalar-area) ```{rubric} SQLAlchemy ``` - [Geopoint and Geoshape types][SQLAlchemy: Geopoint and Geoshape types] - [Working with geospatial types][SQLAlchemy: Working with geospatial types] ```{rubric} Related ``` - {ref}`sql` - {ref}`fulltext-search` - {ref}`vector-search` - {ref}`hybrid-search` - {ref}`query` {tags-primary}`SQL` {tags-primary}`Geospatial` {tags-primary}`GeoJSON` {tags-primary}`WKT` :::: ::::: :::{rubric} About Lucene ::: CrateDB uses Lucene as a storage layer, so it inherits the implementation and concepts of Lucene, in the same spirit like Solr and Elasticsearch. 
- [Geospatial Indexing & Search at Scale with Lucene] - [Geospatial Indexing with Apache Lucene and OpenSearch] - [Apache Solr Spatial Search] :::{div} While Elasticsearch uses a [query DSL based on JSON], in CrateDB, you can work with geospatial data using SQL. ::: ## Synopsis Select data points by distance. ```sql /** * Based on the location of the International Space Station, * this query returns the 10 closest capital cities from * the last known position. **/ SELECT city AS "City Name", country AS "Country", DISTANCE(i.position, c.location)::LONG / 1000 AS "Distance [km]" FROM demo.iss i CROSS JOIN demo.world_cities c WHERE capital = 'primary' AND ts = (SELECT MAX(ts) FROM demo.iss) ORDER BY 3 ASC LIMIT 10; ``` ## Usage Using geographic search in CrateDB. :::{rubric} Index Structure Type ::: Computations on very complex polygons and geometry collections are exact but very expensive. To provide fast queries even on complex shapes, CrateDB uses a different approach to store, analyze and query geo shapes. The available geo shape indexing strategies are based on two primary data structures: Prefix and BKD trees, which are described below. There are three geographic index types: `geohash` (default), `quadtree`, and `bkdtree`, described in more detail at [](inv:crate-reference#type-geo_shape-index). :::{rubric} Column Definition ::: Learn how to define a `GEO_SHAPE` column, and how to adjust parameters of the index structure in the documentation section about [](inv:crate-reference#type-geo_shape-definition). :::{rubric} `MATCH` predicate ::: CrateDB's [MATCH predicate for geographical search] can be used to query geographic indices for relations between geographical shapes and points. It supports the **intersects**, **disjoint**, and **within** operations. ## Learn Learn how to use CrateDB's geospatial data types through ORM adapters, tutorials, or example applications. :::{rubric} Articles ::: - [Geometric Shapes Indexing with BKD-trees] :::{rubric} Applications ::: - [Spatial data demo application using CrateDB and the Express framework] - [Plane Spotting with Software Defined Radio (SDN), CrateDB and Node.js] :::{rubric} Tutorials ::: - [Geospatial Queries with CrateDB] - [Berlin and Geo Shapes in CrateDB] :::{rubric} Videos ::: ::::{info-card} :::{grid-item} :columns: auto auto 8 8 **Getting Started with Geospatial Data in CrateDB** Discover how to effortlessly create a table and seamlessly import weather data into CrateDB in this video. Witness the power of CrateDB's time-series query capabilities in action with a weather dataset, showcasing the dynamic schema flexibility. [CrateDB: Querying Multi-Model Heterogeneous Time-Series Data with SQL] Dive deeper into CrateDB's multi-modal features with demonstrations on handling JSON, geospatial data, and conducting full-text searches. ::: :::{grid-item} :columns: auto auto 4 4   {tags-primary}`Fundamentals` \ {tags-secondary}`Geospatial Data` {tags-secondary}`SQL` ::: :::: ::::{info-card} :::{grid-item} :columns: auto auto 8 8 **Let's Go Plane Spotting with Software Defined Radio, CrateDB and Node.js!** Did you know that passing aircraft can be a rich source of real time data? This talk will teach you how to receive and make sense of messages from aircraft in real time using an ADS-B receiver / software defined radio. You'll see how to decode the messages, store them in a CrateDB database, and make sense of them. 
Finally, the talk demonstrates a system that alerts you when specific types of aircraft are passing by so you can run outside and see that 747 go past. The hardware involved is a Raspberry Pi, a Radarbox flight stick, and a flip dot sign. The software is written in JavaScript and runs on Node.js. ::: :::{grid-item} :columns: auto auto 4 4   {tags-primary}`Fundamentals` \ {tags-secondary}`Geospatial Data` {tags-secondary}`SQL` ::: :::: :::{rubric} Testimonials ::: - [GolfNow chooses CrateDB] for location analytics and commerce. - [Spatially Health chooses CrateDB] for location analytics. :::{seealso} **Product:** [Multi-model Database] • [Geospatial Database] • [Geospatial Data Model] • [Dynamic Database Schemas] • [Nested Data Structure] • [Relational Database] ::: [Apache Solr Spatial Search]: https://solr.apache.org/guide/solr/latest/query-guide/spatial-search.html [Berlin and Geo Shapes in CrateDB]: https://cratedb.com/blog/geo-shapes-in-cratedb [CrateDB: Querying Multi-Model Heterogeneous Time-Series Data with SQL]: https://cratedb.com/resources/videos/unleashing-the-power-of-multi-model-data-querying-heterogeneous-time-series-data-with-sql-in-cratedb [GeoJSON]: https://en.wikipedia.org/wiki/GeoJSON [Geometric Shapes Indexing with BKD-trees]: https://cratedb.com/blog/geometric-shapes-indexing-with-bkd-trees [Geospatial Indexing & Search at Scale with Lucene]: https://portal.ogc.org/files/?artifact_id=90337 [Geospatial Indexing with Apache Lucene and OpenSearch]: https://talks.osgeo.org/foss4g-2022/talk/KPQ97A/ [Geospatial Queries with CrateDB]: https://cratedb.com/blog/geospatial-queries-with-crate-data [GolfNow chooses CrateDB]: https://cratedb.com/resources/videos/interview-golfnow-cratedb [MATCH predicate for geographical search]: inv:crate-reference#sql_dql_geo_match [Plane Spotting with Software Defined Radio (SDN), CrateDB and Node.js]: https://github.com/crate/devrel-plane-spotting-with-cratedb [Spatial data demo application using CrateDB and the Express framework]: https://github.com/crate/devrel-shipping-forecast-geo-demo [Spatially Health chooses CrateDB]: https://cratedb.com/customers/spatially-cratedb-location-analytics [SQLAlchemy: Geopoint and Geoshape types]: inv:sqlalchemy-cratedb#geopoint [SQLAlchemy: Working with geospatial types]: https://cratedb.com/docs/sqlalchemy-cratedb/working-with-types.html#geospatial-types [WKT]: https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry(hnsw)= (vector-store)= (vector-search)= # Vector Search :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: **Vector search on machine learning embeddings: CrateDB is all you need.** :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 :::{rubric} Overview ::: CrateDB can be used as a [vector database] (VDBMS) for storing and retrieving vector embeddings based on the FLOAT_VECTOR data type and its accompanying KNN_MATCH and VECTOR_SIMILARITY functions, effectively conducting HNSW semantic similarity searches on them, also known as vector search. :::{rubric} About ::: Vector search leverages machine learning (ML) to capture the meaning and context of unstructured data, including text and images, transforming it into a numeric representation. Frequently used for semantic search, vector search finds similar data using approximate nearest neighbor (ANN) algorithms. Compared to traditional keyword search, vector search yields more relevant results and executes faster. 
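As a point of orientation, the scores returned in the synopsis below are consistent with the common Euclidean-distance similarity; this is an observation derived from the example values, not a statement from the reference manual:

```{math}
\text{score} = \frac{1}{1 + d^2}
```

where `d` is the Euclidean distance between the stored vector and the query vector, so identical vectors score `1`, and the score approaches `0` as the distance grows.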
:::{rubric} Details ::: CrateDB uses Lucene as a storage layer, so it inherits the implementation and concepts of Lucene Vector Search, in the same spirit as Elasticsearch. To learn more details about what's inside, please refer to the [HNSW] graph search algorithm, [how Lucene implemented it][making of Lucene vector search], [how Elasticsearch now also builds on it][Vector search in Elasticsearch], and why effectively [Lucene Is All You Need]. While Elasticsearch uses a [query DSL based on JSON], in CrateDB, you can work with Lucene Vector Search using SQL. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - [FLOAT_VECTOR](inv:crate-reference#type-float_vector) - [KNN_MATCH](inv:crate-reference#scalar_knn_match) - [VECTOR_SIMILARITY](inv:crate-reference#scalar_vector_similarity) ```{rubric} Related ``` - {ref}`sql` - {ref}`fulltext-search` - {ref}`geo-search` - {ref}`hybrid-search` - {ref}`machine-learning` - {ref}`query` {tags-primary}`SQL` {tags-primary}`Semantic Search` {tags-primary}`Machine Learning` {tags-primary}`ML Embeddings` {tags-primary}`Vector Store` :::: ::::: ## Synopsis Store and query word embeddings using similarity search based on Euclidean distance. ::::{grid} :padding: 0 :class-row: title-slim :::{grid-item} :columns: auto 5 5 5 **DDL** ```sql CREATE TABLE word_embeddings ( text STRING PRIMARY KEY, embedding FLOAT_VECTOR(4) ); ``` ::: :::{grid-item} :columns: auto 7 7 7 **DML** ```sql INSERT INTO word_embeddings (text, embedding) VALUES ('Exploring the cosmos', [0.1, 0.5, -0.2, 0.8]), ('Discovering moon', [0.2, 0.4, 0.1, 0.7]), ('Discovering galaxies', [0.2, 0.4, 0.2, 0.9]), ('Sending the mission', [0.5, 0.9, -0.1, -0.7]) ; ``` ::: :::{grid-item} :columns: auto 7 7 7 **DQL** ```sql WITH param AS (SELECT [0.3, 0.6, 0.0, 0.9] AS sv) SELECT text, VECTOR_SIMILARITY(embedding, (SELECT sv FROM param)) AS score FROM word_embeddings WHERE KNN_MATCH(embedding, (SELECT sv FROM param), 2) ORDER BY score DESC; ``` ::: :::{grid-item} :columns: auto 5 5 5 **Result** ```text +----------------------+-----------+ | text | score | +----------------------+-----------+ | Discovering galaxies | 0.9174312 | | Exploring the cosmos | 0.9090909 | | Discovering moon | 0.9090909 | | Sending the mission | 0.2702703 | +----------------------+-----------+ SELECT 4 rows in set (0.078 sec) ``` ::: :::: ## Usage Working with vector data in CrateDB. :::{rubric} Pure SQL ::: CrateDB's vector store features are available through SQL and can be used by any application speaking it. The fundamental data type of FLOAT_VECTOR is a plain array of floating point numbers, as such it will be communicated through CrateDB's HTTP and PostgreSQL interfaces. :::{rubric} Framework Integrations ::: CrateDB supports applications using the vector data type through corresponding framework adapters. The page about [](#machine-learning) illustrates all of them, covering both topics about machine learning operations (MLOps), and vector database operations (similarity search). ## Learn Learn how to set up your database for vector search, how to create the relevant indices, and how to semantically query your data efficiently. A few must-reads for anyone looking to make sense of large volumes of unstructured text data. :::{rubric} Tutorials ::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Vector Support and KNN Search through SQL** The addition of vector support and KNN search makes CrateDB the optimal multi-model database for all types of data. 
Whether it is structured, semi-structured, or unstructured data, CrateDB stands as the all-in-one solution, capable of handling diverse data types with ease. In this feature-focused blog post, we will introduce how CrateDB can be used as a vector database and how the vector store is implemented. We will also explore the possibilities of the K-Nearest Neighbors (KNN) search, and demonstrate vector capabilities with easy-to-follow examples. {{ '{}[Vector support and KNN search in CrateDB]'.format(blog) }} ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Introduction` \ {tags-secondary}`Vector Store` \ {tags-secondary}`SQL` ::: :::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Retrieval Augmented Generation (RAG) with CrateDB and SQL** This notebook illustrates CrateDB's vector store using pure SQL on behalf of an example exercising a RAG workflow. It uses the white-paper [Time series data in manufacturing] as input data, generates embeddings using OpenAI's ChatGPT, stores them into a table using `FLOAT_VECTOR(1536)`, and queries it using the `KNN_MATCH` and `VECTOR_SIMILARITY` functions. {{ '{}[langchain-rag-sql-github]'.format(nb_github) }} {{ '{}[langchain-rag-sql-colab]'.format(nb_colab) }} {{ '{}[langchain-rag-sql-binder]'.format(nb_binder) }} ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Fundamentals` \ {tags-secondary}`Vector Store` \ {tags-secondary}`LangChain` \ {tags-secondary}`pandas` \ {tags-secondary}`SQL` ::: :::: :::{rubric} Technologies ::: ::::{info-card} :::{grid-item} :columns: auto auto 8 8 **Support for Vector Search in Apache Lucene** Uwe Schindler talks at Berlin Buzzwords 2023 about the new vector search features of Lucene 9, and about the journey of implementing HNSW from 2016 to 2021. - [Uwe Schindler - What's coming next with Apache Lucene?] ::: :::{grid-item} :columns: auto auto 4 4   {tags-primary}`Fundamentals` {tags-secondary}`Lucene` {tags-secondary}`Vector Search` ::: :::: :::{seealso} **Features:** [](#querying) • [](#fulltext) **Domains:** [](#industrial) • [](#machine-learning) • [](#timeseries) **Product:** [Relational Database] • [Vector Database][Vector Database (Product)] ::: [Lucene Is All You Need]: https://arxiv.org/pdf/2308.14963 [making of Lucene vector search]: https://www.apachecon.com/acna2022/slides/04_lucene_vector_search_sokolov.pdf [Time series data in manufacturing]: https://github.com/crate/cratedb-datasets/raw/main/machine-learning/fulltext/White%20paper%20-%20Time-series%20data%20in%20manufacturing.pdf [Uwe Schindler - What's coming next with Apache Lucene?]: https://youtu.be/EHJjSYWjIF0?t=330s&feature=shared [Vector search in Elasticsearch]: https://www.elastic.co/search-labs/blog/articles/vector-search-elasticsearch-rationale [Vector support and KNN search in CrateDB]: https://cratedb.com/blog/unlocking-the-power-of-vector-support-and-knn-search-in-cratedb(hybrid-search)= # Hybrid Search :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **Combined BM25 term search and vector search based on Apache Lucene, using SQL: CrateDB is all you need.** :::{rubric} Overview ::: The capabilities of [](project:#vector-search) are impressive, but it isn't a perfect technology. Without domain-specific datasets to fine-tune models on, a traditional term-based [](project:#fulltext-search) still has a few advantages. :::{rubric} About ::: Vector search unlocks semantic search. 
Based on existing models, it provides incredible and intelligent data retrieval, but struggles when it comes to adapting to new domains. Combining both approaches, in order to leverage the best from both worlds, is called _hybrid search_, aiming to use the performance potential of vector search, and the zero-shot adaptability of traditional search. :::{rubric} Details ::: Hybrid search as a technique enhances relevancy and accuracy by combining the results of two or more search algorithms, achieving better accuracy and relevancy than each algorithm would individually. CrateDB supports three search functions: - kNN search, using `KNN_MATCH` - Okapi BM25 similarity scoring, using `MATCH` - Geospatial search, using `MATCH` :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - [](inv:crate-reference#sql_dql_fulltext_search) - [](inv:crate-reference#fulltext-indices) - [MATCH](inv:crate-reference#predicates_match) - [KNN_MATCH](inv:crate-reference#scalar_knn_match) - [FLOAT_VECTOR](inv:crate-reference#type-float_vector) :::{rubric} Related ::: - {ref}`sql` - {ref}`fulltext-search` - {ref}`vector-search` - {ref}`query` {tags-primary}`Full-Text Search` {tags-primary}`Semantic Search` {tags-primary}`Vector Search` {tags-secondary}`SQL` {tags-secondary}`BM25` {tags-secondary}`kNN` {tags-secondary}`HNSW` :::: ::::: ## Synopsis A quick impression how a single-query hybrid search looks like. :::::{grid} :padding: 0 ::::{grid-item} :columns: auto 7 7 7 :padding: 1 ```sql WITH vector_search as (vector_query), bm25_search as (bm25_query) SELECT (CONVEX or RRF) as hybrid_score FROM search_method_1, search_method_2 WHERE search_method_1.id = search_method_2.id; ``` :::: ::::{grid-item} :columns: auto 5 5 5 :padding: 3 The SQL expression uses common table expressions for a better structure, and an inner-join to join results from both search methods into a single unified result, based on the application requirements at hand. :::{div} text-smaller The blog article referenced below features a full example that explores hybrid scoring using [convex combination], and in the final version assigns a rank to every row, using a window function and [reciprocal rank fusion (RRF)]. ::: :::: ::::: :::{rubric} Example Results ::: Example result for a search with convex combination scoring, yielding individual scores for bm25 vs. vector, and a synthesized hybrid score. The search expression was `MATCH("content", 'knn search')`. ```postgresql +----------------------+-------------+--------------+----------------------------------------------------------------+ | hybrid_score | bm25_score | vector_score | title | |----------------------|-------------|--------------|----------------------------------------------------------------| | 0.7440367221832276 | 1 | 0.57339454 | knn_match(float_vector, float_vector, int) | | 0.4868442595005036 | 0.5512639 | 0.4438978 | Searching On Multiple Columns | | 0.4716400563716888 | 0.56939983 | 0.40646687 | array_position(anycompatiblearray, anycompatible [, integer ] )| | 0.4702456831932068 | 0.55290174 | 0.41514164 | Text search functions and operators | | 0.4677474081516266 | 0.5523509 | 0.4113451 | Synopsis | +----------------------+-------------+--------------+----------------------------------------------------------------+ ``` Example result for a search with reciprocal rank fusion, assigning a rank to every result row, also yielding ranks for individual components bm25 vs. vector, and a synthesized final rank. 
The search expression was `MATCH("content", 'knn search')`. ```postgresql +------------+-----------+-------------+----------------------------------------------+ | final_rank | bm25_rank | vector_rank | title | |------------|-----------|-------------|----------------------------------------------| | 0.032786 | 1 | 1 | knn_match(float_vector, float_vector, int) | | 0.031054 | 7 | 2 | Searching On Multiple Columns | | 0.030578 | 8 | 3 | Usage | | 0.028717 | 5 | 15 | Text search functions and operators | | 0.02837 | 10 | 11 | Synopsis | +------------+-----------+-------------+----------------------------------------------+ ``` ## Usage Working with hybrid search in CrateDB. :::{rubric} Pure SQL ::: Querying both CrateDB's inverted index with BM25 scoring for FTS, and navigating the vector space of machine learning embeddings, are available through SQL and can be used by any application speaking it. ## Learn Learn how to use hybrid search techniques in CrateDB, using pure SQL. :::{rubric} Articles ::: :::::{info-card} ::::{grid-item} :columns: auto 9 9 9 **Blog: Hybrid Search in CrateDB** A common scenario is to combine semantic search ([vector search][nearest neighbor search]) with lexical/term/keyword search ([inverted index] + [BM25]). Semantic search excels at understanding the context of a phrase. Lexical search is great at finding how many times a keyword or phrase appears in a document, taking into account the length and the average length of your documents ([TF–IDF]). The article will go through both search methods. You will learn how to combine them, and how to apply different scoring and re-ranking techniques, all using CrateDB and pure SQL. {hyper-navigate}`Doing Hybrid Search in CrateDB ` :::: ::::{grid-item} :columns: auto 3 3 3 :class: rubric-slim {tags-primary}`Introduction` \ {tags-secondary}`Hybrid Search` \ {tags-secondary}`Pure SQL` :::{rubric} What's Inside ::: :::{div} text-smaller - Full-Text Search ([BM25]) - Vector Search ([kNN]/[HNSW]) - [Convex Combination] - [Reciprocal Rank Fusion (RRF)] - SQL: [CTE], [JOIN], [RANK] ::: :::: ::::: [Convex Combination]: https://en.wikipedia.org/wiki/Convex_combination [Reciprocal Rank Fusion (RRF)]: https://www.elastic.co/guide/en/elasticsearch/reference/current/rrf.html(blob)= # BLOB Store :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **CrateDB provides a blob/object storage subsystem accessible via HTTP, similar to AWS S3.** :::{rubric} Overview ::: CrateDB includes support to store [binary large objects], using its [](inv:crate-reference#blob_support) feature / subsystem. By utilizing CrateDB's cluster features, the files can be replicated and sharded just like regular data. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 ```{rubric} Reference Manual ``` - [](inv:crate-reference#blob_support) {tags-primary}`BLOB Storage` {tags-primary}`Object Storage` :::: ::::: ## Synopsis Example DDL statement. ```sql CREATE BLOB TABLE myblobs CLUSTERED INTO 8 SHARDS with (number_of_replicas=3); ``` ## Learn Learn how to use CrateDB's BLOB store. - [Exploring Blob storage in CrateDB] - [Python driver support for BLOBs] :::{note} {material-outlined}`construction;2em` This page is currently under construction. "About", "Details", and "Usage" sections are missing, and others need expansion. 
::: [binary large objects]: https://en.wikipedia.org/wiki/Object_storage [Exploring Blob storage in CrateDB]: https://community.cratedb.com/t/exploring-blob-storage-in-cratedb/938 [Python driver support for BLOBs]: inv:crate-python:*:*label#blobs(clustering)= # Clustering :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **CrateDB provides scalability through partitioning, sharding, and replication.** :::{rubric} Overview ::: CrateDB uses a shared-nothing architecture to form high-availability, resilient database clusters with minimal effort of configuration, effectively implementing a distributed SQL database. :::{rubric} About ::: CrateDB relies on Lucene for storage and inherits components from Elasticsearch / OpenSearch for cluster consensus. Fundamental concepts of CrateDB are familiar to Elasticsearch users, because both are actually using the same implementation. :::{rubric} Details ::: Sharding and partitioning are techniques used to distribute data evenly across multiple nodes in a cluster, ensuring data scalability, availability, and performance. Replication can be applied to increase redundancy, which reduces the chance of data loss, and to improve read performance. :Sharding: In CrateDB, tables are split into a configured number of shards. Then, the shards are distributed across multiple nodes of the database cluster. Each shard in CrateDB is stored in a dedicated Lucene index. You can think of shards as a self-contained part of a table, that includes both a subset of records and the corresponding indexing structures. Figuring out how many shards to use for your tables requires you to think about the type of data you are processing, the types of queries you are running, and the type of hardware you are using. :Partitioning: CrateDB also supports splitting up data across another dimension with partitioning. Tables can be partitioned by defining partition columns. You can think of a partition as a set of shards. - Partitioned tables optimize access efficiency when querying data, because only a subset of data needs to be addressed and acquired. - Each partition can be backed up and restored individually, for efficient operations. - Tables allow to change the number of shards even after creation time for future partitions. This feature enables you to start out with few shards per partition, and scale up the number of shards for later partitions once traffic and ingest rates increase over the lifetime of your application or system. :Replication: You can configure CrateDB to replicate tables. When you configure replication, CrateDB will ensure that every table shard has one or more copies available at all times. Replication can also improve read performance because any increase in the number of shards distributed across a cluster also increases the opportunities for CrateDB to parallelize query execution across multiple nodes. 
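As a sketch of how these settings can be adjusted after a table has been created, reusing the `timeseries_table` from the synopsis below; the concrete values are illustrative assumptions, not recommendations:

```sql
-- Keep two additional copies of every shard to increase redundancy.
ALTER TABLE timeseries_table SET (number_of_replicas = 2);

-- On a partitioned table, raise the shard count for future partitions;
-- partitions that already exist keep their current number of shards.
ALTER TABLE timeseries_table SET (number_of_shards = 12);
```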
:::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Concepts ::: - {ref}`crate-reference:concept-clustering` - {ref}`crate-reference:concept-storage-consistency` - {ref}`crate-reference:concept-resiliency` :::{rubric} Reference Manual ::: - {ref}`crate-reference:ddl-sharding` - {ref}`crate-reference:partitioned-tables` - {ref}`Partition columns ` - {ref}`crate-reference:ddl-replication` :::{rubric} Guides ::: - {ref}`guide:clustering` {tags-primary}`Clustering` {tags-primary}`Sharding` {tags-primary}`Partitioning` {tags-primary}`Replication` :::: ::::: ## Synopsis With a monthly throughput of 300 GB, partitioning your table by month and clustering it into six shards means that each shard will manage 50 GB of data, which is within the recommended size range (5 - 50 GB). Through replication, the table will store three copies of your data, in order to reduce the chance of permanent data loss. ```sql CREATE TABLE timeseries_table ( ts TIMESTAMP, val DOUBLE PRECISION, part GENERATED ALWAYS AS date_trunc('month', ts) ) CLUSTERED INTO 6 SHARDS PARTITIONED BY (part) WITH (number_of_replicas = 2); ``` ## Learn Individual characteristics and shapes of data need different sharding and partitioning strategies. Learn about the details of shard allocation; this will help you choose the right strategy for your data and your most prominent types of workloads. ::::{grid} 2 2 2 2 :padding: 0 :::{grid-item-card} :link: sharding-partitioning :link-type: ref :link-alt: Sharding and Partitioning :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold :class-body: sd-text-center2 sd-fs2-5 :class-footer: text-smaller Sharding and Partitioning ^^^ - Introduction to the concepts of sharding and partitioning. - Learn how to choose a strategy that fits your needs. +++ {material-outlined}`lightbulb;1.8em` An in-depth guide on how to configure sharding and partitioning, presenting best practices and examples. ::: :::{grid-item-card} :link: sharding-performance :link-type: ref :link-alt: Sharding Performance Guide :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold :class-body: sd-text-center2 sd-fs2-5 :class-footer: text-smaller Sharding Performance Guide ^^^ - Optimising for query performance. - Optimising for ingestion performance. +++ {material-outlined}`lightbulb;1.8em` Guidelines about balancing your strategy to yield the best performance for your workloads. ::: :::{grid-item-card} :link: https://community.cratedb.com/t/sharding-and-partitioning-guide-for-time-series-data/737 :link-alt: Sharding and partitioning guide for time-series data :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold :class-body: sd-text-center2 sd-fs2-5 :class-footer: text-smaller Sharding and partitioning guide for time-series data ^^^ A hands-on walkthrough to support you with building a sharding and partitioning strategy for your time series data. +++ {material-outlined}`lightbulb;1.8em` Includes details about partitioning, sharding, and replication. Gives valuable advice about relevant topics. ::: ::::(snapshot)= # Snapshots :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **CrateDB provides a backup mechanism based on snapshots.** :::{rubric} Overview ::: CrateDB, like Elasticsearch, uses snapshots to perform cluster-wide backups of your data.
:::{rubric} About ::: A snapshot is a backup of a running CrateDB cluster. You can use snapshots for different purposes. - Regularly back up a cluster with no downtime - Recover data after deletion or a hardware failure - Transfer data between clusters - Reduce your storage costs by phasing out partitions into cold and frozen data tier repositories and archives :::{rubric} Details ::: CrateDB stores snapshots in an off-cluster storage location called a snapshot repository. Before you can take or restore snapshots, you must register a snapshot repository on the cluster. CrateDB supports both local and remote storage, with the option to choose among these repository types: - AWS S3 - Google Cloud Storage (GCS) - Microsoft Azure - Local Filesystem :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:snapshot-restore` :::{rubric} SQL Functions ::: - {ref}`crate-reference:sql-create-repository` - {ref}`crate-reference:sql-drop-repository` - {ref}`crate-reference:sql-create-snapshot` - {ref}`crate-reference:sql-restore-snapshot` - {ref}`crate-reference:ref-drop-snapshot` :::{rubric} System Tables ::: - {ref}`sys.repositories ` - {ref}`sys.snapshots ` - {ref}`sys.snapshots_restore ` {tags-primary}`Backup` {tags-primary}`Restore` {tags-primary}`Snapshot` :::: ::::: ## Synopsis Create a repository and a snapshot, list available snapshots, and restore one. **Create Repository** ```sql CREATE REPOSITORY backup TYPE fs WITH (location='', compress=false); ``` **Create Snapshot** ```sql CREATE SNAPSHOT backup.snapshot1 ALL WITH (wait_for_completion=true, ignore_unavailable=true); ``` **List Snapshots** ```sql SELECT repository, name, state FROM sys.snapshots ORDER BY repository, name; ``` **Restore Snapshot** ```sql RESTORE SNAPSHOT backup.snapshot1 TABLE quotes WITH (wait_for_completion=true); ``` ## Usage Please find more details about how to use snapshots in the reference documentation about {ref}`snapshots `. Please also consider reading the [Elasticsearch: Snapshot and restore] documentation section, because both CrateDB and Elasticsearch use the same subsystem implementation. Ensuring your data is safe is both recommended and crucial for {ref}`upgrading` your cluster to newer software releases. :::{note} {material-outlined}`construction;2em` This page is currently under construction. It only includes the most basic essentials, and needs expansion. For example, the "Learn" section, which would refer to corresponding tutorials and other educational material, is missing completely. ::: [Elasticsearch: Snapshot and restore]: https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html(cloud)= # Cloud Native :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **CrateDB has been designed to support cloud computing from the beginning.** :::{rubric} Overview ::: CrateDB works well in cloud computing environments, managed or unmanaged. :::{rubric} About ::: CrateDB is offered as a managed service available on AWS, Azure, and GCP, and also as a fully open source edition. :::{rubric} Details ::: CrateDB is an SQL database for enterprise data warehouse workloads that works across clouds and scales with your data. It lets you run analytics over vast amounts of data in near real time, even with complex queries. You can run CrateDB anywhere.
Whether you want complete peace of mind with the DBaaS model, or deploy CrateDB yourself, we have the right option for you. CrateDB is highly flexible and can be deployed on private or public cloud, on-premises, edge, or in hybrid environments to meet your organization's needs. - {ref}`cloud:index` is a fully managed, terabyte-scale, and cost-effective analytics database offered as DBaaS. - [CrateDB OSS] is the right option to deploy CrateDB yourself, to contribute features, and to explore and inspect its source code. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:node_discovery` - {ref}`guide:install-cloud` :::{rubric} Guides ::: - {ref}`guide:clustering` - {ref}`guide:azure` - {doc}`guide:install/cloud/aws/index` :::{rubric} CrateDB Cloud ::: - {ref}`cloud:cluster-deployment-marketplace` - {ref}`Documentation ` - [Web Console] {tags-primary}`Cloud` {tags-primary}`AWS` {tags-primary}`Azure` {tags-primary}`GCP` :::: ::::: ## Usage Applications that support operating CrateDB in cloud environments. - {ref}`cloud-cli:index` is a command-line interface (CLI) tool for interacting with CrateDB Cloud. - [CrateDB Kubernetes Operator] provides a convenient way to run CrateDB clusters on Kubernetes. :::{seealso} **Product:** [CrateDB Editions] ::: :::{note} {material-outlined}`construction;2em` This page is currently under construction. It only includes the most basic essentials, and needs expansion. For example, the "Synopsis" and "Learn" sections are missing completely, referring to corresponding tutorials and other educational material. ::: [CrateDB Editions]: https://cratedb.com/database/editions [CrateDB Kubernetes Operator]: https://github.com/crate/crate-operator/ [CrateDB OSS]: https://github.com/crate/crate [Web Console]: https://console.cratedb.cloud/(storage-layer)= # Storage Layer :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: The CrateDB storage layer is based on Lucene. By default, all fields are indexed, nested or not, but the indexing can be turned off selectively. This page enumerates some concepts of Lucene, and the article [Indexing and Storage in CrateDB] goes into more details by exploring its internal workings. ## Lucene Lucene offers scalable and high-performance indexing which enables efficient search and aggregations over documents and rapid updates to the existing documents. Solr and Elasticsearch are building upon the same technologies. - **Documents** A single record in Lucene is called "document", which is a unit of information for search and indexing that contains a set of fields, where each field has a name and value. A Lucene index can store an arbitrary number of documents, with an arbitrary number of different fields. - **Append-only segments** A Lucene index is composed of one or more sub-indexes. A sub-index is called a segment, it is immutable, and built from a set of documents. When new documents are added to the existing index, they are added to the next segment, while previous segments are never modified. If the number of segments becomes too large, the system may decide to merge some segments and discard the freed ones. This way, adding a new document does not require rebuilding the whole index structure completely. 
- **Column store** For text values, other than storing the row data as-is (and indexing each value by default), each value term is stored into a [column-based store] by default, which offers performance improvements for global aggregations and groupings, and enables efficient ordering, because the data for one column is packed at one place. In CrateDB, the column store is enabled by default and can be disabled only for text fields, not for other primitive types. Furthermore, CrateDB does not support storing values for container and geospatial types in the column store. ## Data structures This section enumerates the three main Lucene data structures that are used within CrateDB: Inverted indexes for text values, BKD trees for numeric values, and DocValues. - **Inverted index** The Lucene indexing strategy for text fields relies on a data structure called inverted index, which is defined as a "data structure storing a mapping from content, such as words and numbers, to its location in the database file, document or set of documents". Depending on the configuration of a column, the index can be plain (default) or full-text. An index of type "plain" indexes content of one or more fields without analyzing and tokenizing their values into terms. To create a "full-text" index, the field value is first analyzed and based on the used analyzer, split into smaller units, such as individual words. A full-text index is then created for each text unit separately. The inverted index enables a very efficient search over textual data. - **BKD tree** To optimize numeric range queries, Lucene uses an implementation of the Block KD (BKD) tree data structure. The BKD tree index structure is suitable for indexing large multi-dimensional point data sets. It is an I/O-efficient dynamic data structure based on the KD tree. Contrary to its predecessors, the BKD tree maintains its high space utilization and excellent query and update performance regardless of the number of updates performed on it. Numeric range queries based on BKD trees can efficiently search numerical fields, including fields defined as `TIMESTAMP` types, supporting performant date range queries. - **DocValues** Because Lucene's inverted index data structure implementation is not optimal for finding field values by given document identifier, and for performing column-oriented retrieval of data, the DocValues data structure is used for those purposes instead. DocValues is a column-based data storage built at document index time. They store all field values that are not analyzed as strings in a compact column, making it more effective for sorting and aggregations. :::{todo} Bring page into the same shape like the others in this section. ::: [column-based store]: https://cratedb.com/docs/crate/reference/en/latest/general/ddl/storage.html [Indexing and Storage in CrateDB]: https://cratedb.com/blog/indexing-and-storage-in-cratedb(hybrid-index)= # Hybrid Index :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 **CrateDB indexes all columns by default, for lightning-fast query responses on your fingertips.** :::{rubric} Overview ::: CrateDB, like Lucene, Elasticsearch, and Rockset, indexes all fields of stored documents by default, yielding instant query performance on everything. :::{rubric} About ::: By default, CrateDB indexes all data in every field, and each indexed field has a dedicated, optimized data structure. 
For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees. The ability to use the per-field data structures to assemble and return search results is what makes CrateDB so fast. :::{rubric} Details ::: For a quick refresher on the technologies behind the storage engine of CrateDB, let us refer you to a few upstream documentation pages and articles about Lucene and Elasticsearch. - [Searching and Indexing With Apache Lucene] - [An Introduction to Elasticsearch] - [Elasticsearch: Documents and Indices] See also an article by Rockset, which describes the same powerful indexing regime and presents the paradigm as a unique invention. - [Converged Index™: The Secret Sauce Behind Rockset's Fast Queries] On disk, CrateDB stores data in Lucene indexes. By default, all fields are indexed, nested or not, but the indexing can be turned off selectively. - {ref}`storage-layer` :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:fulltext-indices` - {ref}`crate-reference:type-geo_shape-index` {tags-primary}`Index Types` {tags-primary}`Index Everything` {tags-primary}`Fast Query Execution` :::: ::::: ## Usage Handling data types in the most efficient way, for maximum usability, is built into CrateDB. You automatically leverage its indexing data structures by submitting SQL queries to the execution engine. ## Learn Articles about CrateDB's uniqueness as an "index everything by default" database, with insights into the technologies behind it, and comparisons with solutions from other vendors. ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Blog: Indexing and Storage in CrateDB** {{ '{}[Indexing and Storage in CrateDB]'.format(blog) }} Learn about the fundamentals of the CrateDB storage layer, looking at the three main Lucene structures that are used within CrateDB: Inverted Indexes for text values, BKD-trees for numeric values, and Doc Values. ::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Fundamentals` \ {tags-secondary}`Converged Indexing` {tags-secondary}`Deep Dive` ::: :::: ::::{info-card} :::{grid-item} :columns: auto 9 9 9 **Blog: Time Series Benchmark on CrateDB and MongoDB** {{ '{}[Time Series Benchmark on CrateDB and MongoDB]'.format(blog) }} {{ '{}[Independent comparison of CrateDB and MongoDB using Time Series Benchmark Suite]'.format(readmore) }} > When using CrateDB, > it's like you've stumbled into an alternative reality where Elastic is a > proper database. [^es-advent] > > -- Henrik Ingo, Nyrkiö Oy, independent database consultant, MongoDB About the revolutionary idea to index all columns, in order to make all queries equally fast, unlocking completely ad hoc exploratory querying. > I knew that Rockset had developed a service where they would index every > column by default, based on their innovative LSM indexing structure, > making such a revolutionary idea even possible. CrateDB is now the > second product I've heard of offering this feature – and now with > Rockset being acquired and shutting down [...] Also about benchmarking CrateDB against MongoDB using the [Distributed Systems Infrastructure (DSI)] benchmark framework and the [TimescaleDB Time Series Benchmark Suite (TSBS)]. [^es-advent]: In the early days of Elasticsearch, users dearly wanted to use it as their primary and only database, but were educated not to.
::: :::{grid-item} :columns: auto 3 3 3 {tags-primary}`Benchmark` \ {tags-secondary}`Converged Indexing` {tags-secondary}`Query Performance` ::: :::: :::{note} {material-outlined}`construction;2em` This page is currently under construction. It only includes the most basic essentials, and needs expansion. For example, the "Synopsis" section is missing completely, and the "Usage" section is a bit thin. ::: [An Introduction to Elasticsearch]: https://dzone.com/articles/an-introduction-to-elasticsearch [Converged Index™: The Secret Sauce Behind Rockset's Fast Queries]: https://web.archive.org/web/20241108051635/https://rockset.com/blog/converged-indexing-the-secret-sauce-behind-rocksets-fast-queries/ [Distributed Systems Infrastructure (DSI)]: https://github.com/nyrkio/dsi [Elasticsearch for Dummies]: https://dzone.com/articles/elasticsearch-for-dummies [Elasticsearch: Documents and Indices]: https://www.elastic.co/guide/en/elasticsearch/reference/current/documents-indices.html [Independent comparison of CrateDB and MongoDB using Time Series Benchmark Suite]: https://blog.nyrkio.com/wp-content/uploads/2024/07/Nyrkio-comparison-of-CrateDB-and-MongoDB-with-TSBS-v2.pdf [Indexing and Storage in CrateDB]: https://cratedb.com/blog/indexing-and-storage-in-cratedb [Searching and Indexing With Apache Lucene]: https://dzone.com/articles/apache-lucene-a-high-performance-and-full-featured [Time Series Benchmark on CrateDB and MongoDB]: https://blog.nyrkio.com/2024/07/11/timeseries-benchmark-on-cratedb-and-mongodb/ [TimescaleDB Time Series Benchmark Suite (TSBS)]: https://github.com/timescale/tsbs(query)= (querying)= (advanced-querying)= # Advanced Querying :::{include} /_include/links.md ::: About all the advanced querying features of CrateDB, unifying data types and query characteristics. Mix full-text search with time series aspects, and run powerful aggregations or other kinds of complex queries on your data. CrateDB supports effective [time series](#timeseries) analysis with fast aggregations, relational features for [JOIN](#join) operations, and a rich set of built-in functions. (at-a-glance)= ## At a Glance :::::{info-card} ::::{grid-item} :columns: auto 9 9 9 **Analyzing Device Readings** Effectively query measurement readings using enhanced features for time series data. Run aggregations with gap filling / interpolation, using common table expressions (CTEs) and LAG / LEAD window functions. Find maximum values using the MAX_BY aggregate function, returning the value from one column based on the maximum or minimum value of another column within a group. 
:::{code} sql WITH OrderedData AS ( SELECT timestamp, location, temperature, LAG(temperature, 1) IGNORE NULLS OVER w AS prev_temp, LEAD(temperature, 1) IGNORE NULLS OVER w AS next_temp FROM weather_data WINDOW w AS (PARTITION BY location ORDER BY timestamp) ) SELECT timestamp, location, temperature, COALESCE(temperature, (prev_temp + next_temp) / 2) AS interpolated_temperature FROM OrderedData ORDER BY location, timestamp; ::: {{ '{}(#timeseries-analysis-advanced)'.format(tutorial) }} :::: ::::{grid-item} :columns: 3 {tags-primary}`Aggregation` \ {tags-primary}`CTE` \ {tags-primary}`Gap Filling` \ {tags-primary}`Interpolation` \ {tags-primary}`Window Functions` {tags-secondary}`Time Series` \ {tags-secondary}`SQL` :::: ::::: :::::{info-card} ::::{grid-item} :columns: auto 9 9 9 **Time Bucketing** Based on sensor data, this query calculates: - time-buckets of 10 seconds - different aggregations per time-bucket and host group :::{code} sql SELECT FLOOR(EXTRACT(epoch FROM m.timestamp) / 10) * 10 AS period, h.host_group, MIN(m.fields['usage_user']) AS "min", AVG(m.fields['usage_user']) AS "avg", MAX(m.fields['usage_user']) AS "max" FROM telegraf.metrics m LEFT JOIN telegraf.hosts h ON h.host_name = m.tags['host'] WHERE tags['cpu'] = 'cpu-total' AND m.timestamp > NOW() - '150 seconds'::INTERVAL GROUP BY 1, 2 ORDER BY 1 DESC; ::: :::: ::::{grid-item} :columns: 3 {tags-primary}`Aggregation` \ {tags-primary}`Grouping` \ {tags-primary}`Time Bucketing` \ {tags-primary}`Time Intervals` {tags-secondary}`Time Series` \ {tags-secondary}`SQL` :::: ::::: (aggregation)= (aggregations)= ## Aggregations Fast aggregations, even with complex queries. - [Analyzing Device Readings with Metadata Integration] ## Bulk Operations You can use the [bulk operations interface] feature to perform many inserts in a single operation. See also [bulk operations for INSERTs]. The advantages are: - Significantly less internal network traffic than executing each insert statement individually. - Even though you're executing multiple insert statements, the bulk query only needs to be parsed, planned, and executed once. (cte)= (ctes)= ## CTEs [Time Series: Analyzing Weather Data] (hyperloglog)= ## HyperLogLog [HyperLogLog] is an efficient approximate cardinality estimation algorithm. CrateDB's [hyperloglog_distinct] aggregate function calculates an approximate count of distinct non-null values using the [HyperLogLog++] algorithm. See also [Introducing: HyperLogLog]. ## LOCF / NOCB https://community.cratedb.com/t/interpolating-missing-time-series-values/1010 ## LTTB https://community.cratedb.com/t/advanced-downsampling-with-the-lttb-algorithm/1287 (maximum-minimum)= ## Maximum/Minimum Values [Analyzing Device Readings with Metadata Integration] [Time Series: Analyzing Weather Data] ## Search CrateDB provides capabilities for full-text search, vector search, and hybrid search, all based on vanilla SQL. - {ref}`fulltext-search` - {ref}`geo-search` - {ref}`vector-search` - {ref}`hybrid-search` ## Time Bucketing https://community.cratedb.com/t/resampling-time-series-data-with-date-bin/1009 (unnest)= ## UNNEST - [UNNEST] - [Optimizing storage for historic time-series data] - [Ingesting into CrateDB with UNNEST and Node.js] - [Ingesting into CrateDB with UNNEST and Golang] (window-functions)= ## Window Functions - [Window functions in CrateDB] - [Time Series: Analyzing Weather Data] :::{note} {material-outlined}`construction;2em` This page is currently under construction. 
It only includes a few pointers to advanced use cases, which need expansion. It is also not in the same shape as the other pages in this section. ::: :::{seealso} **Features:** [](#relational) **Domains:** [](#metrics-store) • [](#analytics) • [](#industrial) • [](#timeseries) • [](#machine-learning) **Product:** [Relational Database] • [Indexing, Columnar Storage, and Aggregations] ::: :::{seealso} **Features:** [](#relational) **Domains:** [](#metrics-store) • [](#analytics) • [](#industrial) • [](#timeseries) • [](#machine-learning) **Product:** [Relational Database] • [Indexing, Columnar Storage, and Aggregations] ::: [Analyzing Device Readings with Metadata Integration]: #timeseries-analysis-metadata [bulk operations interface]: inv:crate-reference#http-bulk-ops [bulk operations for INSERTs]: #inserts_bulk_operations [HyperLogLog]: https://en.wikipedia.org/wiki/HyperLogLog [HyperLogLog++]: https://research.google/pubs/hyperloglog-in-practice-algorithmic-engineering-of-a-state-of-the-art-cardinality-estimation-algorithm/ [hyperloglog_distinct]: inv:crate-reference#aggregation-hyperloglog-distinct [Ingesting into CrateDB with UNNEST and Golang]: https://community.cratedb.com/t/connecting-to-cratedb-from-go/642#unnest-5 [Ingesting into CrateDB with UNNEST and Node.js]: https://community.cratedb.com/t/connecting-to-cratedb-with-node-js/751#ingesting-into-cratedb-with-unnest-3 [Introducing: HyperLogLog]: https://cratedb.com/blog/feature-focus-making-things-hyper-fast-fast [Optimizing storage for historic time-series data]: https://community.cratedb.com/t/optimizing-storage-for-historic-time-series-data/762 [Time Series: Analyzing Weather Data]: #timeseries-analysis-weather [UNNEST]: #inserts_unnest [Window functions in CrateDB]: https://community.cratedb.com/t/window-functions-in-cratedb/1398(generated-columns)= # Generated Columns CrateDB's SQL DDL statements accept defining {ref}`crate-reference:ddl-generated-columns`. Those columns values are computed by applying a generation expression in the context of the current row. The generation expression can reference the values of other columns. ## Synopsis :::{rubric} Canonical Example ::: The generation expression is evaluated in the context of the current row. ```sql CREATE TABLE computed ( dividend double precision, divisor double precision, quotient GENERATED ALWAYS AS (dividend / divisor) ); ``` :::{rubric} Current Timestamp ::: Populate a database column with the value of the current timestamp when inserting a row. ```sql CREATE TABLE computed_non_deterministic ( id LONG, last_modified TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS CURRENT_TIMESTAMP ); ``` :::{rubric} Partition Column ::: Define the partition column, to control how CrateDB distributes data to shards. ```sql CREATE TABLE computed_and_partitioned ( huge_cardinality bigint, big_data text, partition_value GENERATED ALWAYS AS (huge_cardinality % 10) ) PARTITIONED BY (partition_value); ``` :::{note} {material-outlined}`construction;2em` This page is currently under construction. It includes not even the most basic essentials, and needs expansion. For example, the "Usage" and "Learn" sections are missing completely, and it's also not in the same shape as the other pages in this section. :::(server-side-cursor)= (server-side-cursors)= # Server-Side Cursors CrateDB implements the SQL Standard feature [F431 (read-only scrollable cursor)], aka. server-side cursors, aka. portals. A cursor is used to retrieve a small number of rows at a time from a query with a large result set. 
Such a result set might not be suitable for other ways of consumption, because its size can exceed the system memory of the machines that process it, on both the server and the client side. With F431, you {ref}`crate-reference:sql-declare` a server-side cursor and iterate over it, fetching the rows progressively using {ref}`crate-reference:sql-fetch`. :::{note} {material-outlined}`construction;2em` This page is currently under construction. It does not yet include even the most basic essentials and needs expansion. For example, the "Usage" and "Learn" sections are missing completely, and it's also not in the same shape as the other pages in this section. ::: [F431 (read-only scrollable cursor)]: https://renenyffenegger.ch/notes/misc/ISO/9075/features/F431
(fdw)= # Foreign Data Wrapper :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 :::{rubric} Overview ::: In the spirit of the PostgreSQL FDW implementation, CrateDB offers the possibility to access database tables on remote database servers as if they were stored within CrateDB. :::{rubric} About ::: Foreign Data Wrappers allow you to make data in foreign systems available as tables within CrateDB. You can then query these foreign tables like regular user tables. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:administration-fdw` :::{rubric} SQL Functions ::: - {ref}`crate-reference:ref-create-server` - {ref}`crate-reference:ref-drop-server` - {ref}`crate-reference:ref-create-foreign-table` - {ref}`crate-reference:ref-drop-foreign-table` :::{rubric} System Tables ::: - {ref}`crate-reference:foreign_servers` - {ref}`crate-reference:foreign_server_options` - {ref}`crate-reference:foreign_tables` - {ref}`crate-reference:foreign_table_options` - {ref}`crate-reference:user_mappings` - {ref}`crate-reference:user_mapping_options` {tags-primary}`SQL` {tags-primary}`FDW` :::: ::::: ## Synopsis Connect to a remote PostgreSQL server. ```sql CREATE SERVER my_postgresql FOREIGN DATA WRAPPER jdbc OPTIONS (url 'jdbc:postgresql://example.com:5432/'); ``` Mount a database table. ```sql CREATE FOREIGN TABLE doc.remote_documents (name text) SERVER my_postgresql OPTIONS (schema_name 'public', table_name 'documents'); ``` :::{note} {material-outlined}`construction;2em` This page is currently under construction. It does not yet include even the most basic essentials and needs expansion. For example, the "Details", "Usage" and "Learn" sections are missing completely. ::: :::{seealso} **Product:** [Relational Database] :::
(udf)= # User-Defined Functions :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 :::{rubric} Overview ::: CrateDB supports user-defined functions (UDFs) that can be written in JavaScript. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:user-defined-functions` :::{rubric} SQL Functions ::: - {ref}`crate-reference:ref-create-function` - {ref}`crate-reference:ref-drop-function` {tags-primary}`SQL` {tags-primary}`UDF` :::: ::::: ## Synopsis Define a function. ```sql CREATE FUNCTION my_subtract_function(integer, integer) RETURNS integer LANGUAGE JAVASCRIPT AS 'function my_subtract_function(a, b) { return a - b; }'; ``` Use the function.
```sql SELECT doc.my_subtract_function(3, 1) AS col; ``` ``` +-----+ | col | +-----+ | 2 | +-----+ SELECT 1 row in set (... sec) ``` :::{note} {material-outlined}`construction;2em` This page is currently under construction. It does not yet include even the most basic essentials and needs expansion. For example, the "About", "Details", "Usage" and "Learn" sections are missing completely. ::: :::{seealso} **Product:** [Relational Database] :::
(replication)= # Cross-Cluster Replication :::{include} /_include/links.md ::: :::{include} /_include/styles.html ::: :::::{grid} :padding: 0 ::::{grid-item} :class: rubric-slim :columns: auto 9 9 9 :::{rubric} Overview ::: Cross-cluster replication, also called logical replication, is a method of data replication across multiple clusters. :::{rubric} About ::: CrateDB uses a "publish and subscribe" model where subscribers pull data from the publications of the publisher they subscribed to. :::{rubric} Details ::: Logical replication is useful for several use cases. - Consolidating data from multiple clusters into a single one for aggregated reports. - Ensuring high availability if one cluster becomes unavailable. - Replicating between different compatible versions of CrateDB. Replicating tables created on a cluster with a higher major/minor version to a cluster with a lower major/minor version is not supported. :::: ::::{grid-item} :class: rubric-slim :columns: auto 3 3 3 :::{rubric} Reference Manual ::: - {ref}`crate-reference:administration-logical-replication` :::{rubric} SQL Functions ::: - {ref}`crate-reference:sql-create-publication` - {ref}`crate-reference:sql-alter-publication` - {ref}`crate-reference:sql-drop-publication` - {ref}`crate-reference:sql-create-subscription` - {ref}`crate-reference:sql-drop-subscription` :::{rubric} System Tables ::: - {ref}`crate-reference:pg_publication` - {ref}`crate-reference:pg_publication_tables` - {ref}`crate-reference:pg_subscription` - {ref}`crate-reference:pg_subscription_rel` {tags-primary}`SQL` {tags-primary}`Logical Replication` :::: ::::: ## Synopsis Create a publication others can subscribe to. ```sql CREATE PUBLICATION temperature_publication FOR TABLE doc.temperature_data; ``` Verify that the publication has been created. ```sql SELECT * FROM pg_publication; ``` Create a subscription. ```sql CREATE SUBSCRIPTION temperature_subscription CONNECTION 'crate://cratedb.example.net:5432?user=crate&mode=pg_tunnel' PUBLICATION temperature_publication; ``` Verify the operational status of the subscription. ```sql SELECT subname, r.srrelid::TEXT, srsubstate, srsubstate_reason FROM pg_subscription s LEFT JOIN pg_subscription_rel r ON s.oid = r.srsubid; ``` ## Learn Learn how to set up logical replication between CrateDB clusters. ::::{grid} 2 2 2 2 :padding: 0 :::{grid-item-card} :link: guide:logical_replication_setup :link-type: ref :link-alt: Logical replication setup between CrateDB clusters :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold :class-body: sd-text-center2 sd-fs2-5 :class-footer: text-smaller Logical replication using Docker ^^^ - Hands-on tutorial exercising publishing and subscribing end-to-end. - Uses a workstation setup based on two instances running on Docker or Podman. +++ How to configure logical replication on standalone clusters.
::: :::{grid-item-card} :link: cloud:logical-replication :link-type: ref :link-alt: Logical replication setup on CrateDB Cloud :padding: 3 :class-header: sd-text-center sd-fs-5 sd-align-minor-center sd-font-weight-bold :class-body: sd-text-center2 sd-fs2-5 :class-footer: text-smaller Logical replication on CrateDB Cloud ^^^ - Notes about configuring the feature in the context of Cloud clusters. - It mostly works out of the box. +++ How to configure logical replication on CrateDB Cloud clusters. ::: ::::
# Time Series with CrateDB This folder provides examples, tutorials and runnable code on how to use CrateDB for time-series use cases. The tutorials and examples focus on being easy to understand and use. They are a good starting point for your own projects. ## What's inside [![Made with Jupyter](https://img.shields.io/badge/Made%20with-Jupyter-orange?logo=Jupyter)](https://jupyter.org/try) [![Made with Markdown](https://img.shields.io/badge/Made%20with-Markdown-1f425f.svg?logo=Markdown)](https://commonmark.org) This folder provides guidelines and runnable code to get started with time series data in [CrateDB]. Please also refer to the other examples in this repository, e.g. about machine learning, to see predictions and AutoML in action. - [README.md](README.md): The file you are currently reading contains a walkthrough about how to get started with time series and CrateDB, and guides you to corresponding example programs and notebooks. - `timeseries-queries-and-visualization.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](timeseries-queries-and-visualization.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/timeseries/timeseries-queries-and-visualization.ipynb) This notebook explores how to access timeseries data from CrateDB via SQL, load it into pandas data frames, and visualize it using Plotly. It also demonstrates more advanced time series queries in SQL, e.g. aggregations, window functions, interpolation of missing data, common table expressions, moving averages, JOINs and the handling of JSON data. - `exploratory_data_analysis.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](exploratory_data_analysis.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/timeseries/exploratory_data_analysis.ipynb) This notebook explores how to access timeseries data from CrateDB via SQL, and perform exploratory data analysis (EDA) with PyCaret. It also shows how you can generate various plots and charts for EDA, helping you understand data distributions and relationships between variables, and identify patterns. - `time-series-decomposition.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](time-series-decomposition.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/timeseries/time-series-decomposition.ipynb) This notebook illustrates how to extract data from CrateDB and how to use PyCaret for time-series decomposition. Furthermore, it shows how to preprocess data and plot the time-series decomposition by breaking the series down into its basic components: trend, seasonality, and residual (or irregular) fluctuations.
- `time-series-anomaly-detection.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](time-series-anomaly-detection.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/timeseries/time-series-anomaly-detection.ipynb) This notebook walks you through anomaly detection analysis using the PyCaret library. - `weather-data-grafana-dashboard.json` An exported JSON representation of a Grafana dashboard designed to visualize weather data. This dashboard includes a set of pre-defined panels and widgets that display various weather metrics. Additionally, within this dashboard configuration, there are advanced time-series analysis queries. These queries are tailored to fetch, aggregate, interpolate, and process weather data over time. To ensure the dashboard functions correctly, it's necessary to configure the data source within Grafana. This dashboard uses the `grafana-postgresql-datasource` or another configured default data source. In the data source settings, fill in the necessary parameters to connect to your CrateDB instance. This includes setting up the database name (`database=doc`), user, password, and host. - `dask-weather-data-import.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](dask-weather-data-import.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/timeseries/dask-weather-data-import.ipynb) This notebook walks you through an example of downloading and inserting a larger data set into CrateDB via pandas and Dask, utilizing Dask's capabilities to parallelize operations. ## Software Tests To run the software tests, install a development sandbox into this folder, which also satisfies all the dependencies. ```console python3 -m venv .venv source .venv/bin/activate pip install -U -r requirements.txt -r requirements-dev.txt ``` Then, invoke the software tests, which roughly validate all notebooks within this folder by running them to completion. ```console time pytest ``` To run tests for individual notebooks, use the `-k` option to select them by name fragment. ```console time pytest -k explo time pytest -k visu ``` [CrateDB]: https://github.com/crate/crate
# 🛠️ Timeseries QA Assistant with CrateDB, LLMs, and Machine Manuals This project provides a full interactive pipeline for simulating telemetry data from industrial motors, storing that data in CrateDB, and enabling natural-language querying powered by OpenAI — including RAG-style guidance from machine manuals. --- ## 📦 Features - **Synthetic Data Generation** for 10 industrial motors over 30 days - **CrateDB Integration** for timeseries storage and fast querying - **Fictional Machine Manuals** linked to each machine for RAG-style enrichment - **LLM-Powered Chat Interface** with context-aware SQL generation - **Emergency Protocol Suggestions** based on detected anomalies --- ## 🚀 Setup & Installation 1. **Install dependencies** ```bash pip install -r requirements.txt ``` 2. **Create a .env file in the root directory** ``` bash OPENAI_API_KEY=your_openai_api_key CRATEDB_HOST=localhost CRATEDB_PORT=4200 ``` 3.
**Ensure CrateDB is running locally (or adapt host/port to remote)** You can use docker-compose with this `docker-compose.yml`: ``` yaml version: "3.9" services: cratedb: container_name: cratedb-chatbot image: crate ports: - "4200:4200" - "5432:5432" environment: - CRATE_HEAP_SIZE=1g deploy: replicas: 1 restart_policy: condition: on-failure ``` Run docker-compose: ``` bash docker-compose pull docker-compose up -d ``` ## Pipeline overview 1. **Generate Timeseries Data** Creates realistic vibration, temperature, and rotation logs every 15 minutes for 10 machines. ``` bash python DataGenerator.py ``` Output should look like: ``` bash timestamp vibration temperature rotations machine_id 0 2025-03-09 10:29:35.015476 0.751030 48.971560 1609.573066 0 1 2025-03-09 10:44:35.015476 0.774157 49.696297 1601.617712 0 2 2025-03-09 10:59:35.015476 0.709293 49.308419 1603.563044 0 3 2025-03-09 11:14:35.015476 0.817229 51.463994 1586.055485 0 4 2025-03-09 11:29:35.015476 0.795769 49.277951 1596.797612 0 ✅ Stored 28800 rows in CrateDB. Total generated readings: 28800 ``` 2. **Generate & Store Machine Manuals** Populates `machine_manuals` with fictional documentation for each machine, including: • Operational limits • Anomaly detection triggers • Emergency protocols • Maintenance schedules ``` bash python Generate-Manuals.py ``` Output: ``` bash ✅ Fictional machine manuals stored in CrateDB. ``` 3. **Run the Q&A Assistant** Launch the interactive assistant: ``` bash python tag-motor-chat.py ``` Example output: ``` bash Timeseries Q&A Assistant (type 'exit' to quit) Example Questions: • What is the average temperature when vibration > 1.5? • What is the average temperature when vibration > 1.5 for motor 5? • How many anomalies happened last week? • What was the time of highest vibration for each machine? • What should I do if machine 2 has an anomaly? • What does the maintenance plan for machine 1 look like? Data Overview: - Total readings: 1000 - Time range: 2025-04-07 11:29:35 to 2025-04-08 12:14:35 - Machines: [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] - Vibration range: 0.76 to 1.11 - Temperature range: 40.69°C to 50.27°C - Rotations range: 1402 RPM to 1492 RPM - Anomalies (vibration > 1.5): 0 ``` ## Supported Queries Try natural language prompts like: • "Show top 5 vibration events last month" • "When was the last anomaly for each machine?" • "What should I do if machine 3 has an anomaly?" → Triggers manual-based response • "How many anomalies occurred between March 10 and March 15?" ## How It Works • The assistant uses OpenAI GPT-3.5 to translate your question into SQL. • SQL is executed directly on CrateDB, pulling up real telemetry. • If anomalies (vibration > 1.5) are found, it retrieves relevant manual sections. • All results are summarized and explained naturally. ## Architecture +--------------------------+ | generate_timeseries.py | | → motor_readings (SQL) | +--------------------------+ ↓ +------------------------+ | generate_manuals.py | | → machine_manuals (SQL)| +------------------------+ ↓ +--------------------------+ | rag-motor-chat.py | | - OpenAI Q&A | | - Manual-based Guidance | +--------------------------+
# LangChain and CrateDB ## About LangChain [LangChain] is an open source framework for developing applications powered by language models. It provides a complete set of powerful and flexible components for building context-aware, reasoning applications. Please refer to the [LangChain documentation] for further information.
Common end-to-end use cases are: - Analyzing structured data - Chatbots and friends - Document question answering LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex: - [Model I/O][Model I/O]: Interface with language models - [Retrieval][Retrieval]: Interface with application-specific data - [Chains][Chains]: Construct sequences of calls - [Agents][Agents]: Let chains choose which tools to use given high-level directives - [Memory][Memory]: Persist application state between runs of a chain - [Callbacks][Callbacks]: Log and stream intermediate steps of any chain ## What's inside [![Made with Jupyter](https://img.shields.io/badge/Made%20with-Jupyter-orange?logo=Jupyter)](https://jupyter.org/try) [![Made with Markdown](https://img.shields.io/badge/Made%20with-Markdown-1f425f.svg?logo=Markdown)](https://commonmark.org) This folder provides guidelines and runnable code to get started with [LangChain] and [CrateDB]. - [readme.md](readme.md): The file you are currently reading contains a walkthrough about how to get started with the LangChain framework and CrateDB, and guides you to corresponding example programs that demonstrate how to use the different subsystems. - [requirements.txt](requirements.txt): Pulls in a patched version of LangChain, as well as the CrateDB client driver and the `crash` command-line interface. - `vector_search.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](vector_search.ipynb) [![Launch Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/crate/cratedb-examples/main?labpath=topic%2Fmachine-learning%2Fllm-langchain%2Fvector_search.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/vector_search.ipynb) This notebook explores CrateDB's [`FLOAT_VECTOR`] and [`KNN_MATCH`] functionalities for storing and retrieving embeddings, and for conducting similarity searches. - `document_loader.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](document_loader.ipynb) [![Launch Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/crate/cratedb-examples/main?labpath=topic%2Fmachine-learning%2Fllm-langchain%2Fdocument_loader.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/document_loader.ipynb) The notebook about the Document Loader demonstrates how to query a database table in CrateDB and use it as a source provider for LangChain documents. - `conversational_memory.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](conversational_memory.ipynb) [![Launch Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/crate/cratedb-examples/main?labpath=topic%2Fmachine-learning%2Fllm-langchain%2Fconversational_memory.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/conversational_memory.ipynb) LangChain also supports managing conversation history in SQL databases. This notebook demonstrates how that works with CrateDB.
- `cratedb-vectorstore-rag-openai-sql.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](cratedb-vectorstore-rag-openai-sql.ipynb) [![Launch Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/crate/cratedb-examples/main?labpath=topic%2Fmachine-learning%2Fllm-langchain%2Fcratedb-vectorstore-rag-openai-sql.ipynb) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/cratedb-vectorstore-rag-openai-sql.ipynb) This example intentionally shows how to use the CrateDB Vector Store with plain SQL. There might be cases where the default parameters of the LangChain integration are not sufficient, or you need to use more advanced SQL queries. The example still uses LangChain components to split a PDF file into chunks, and leverages OpenAI to calculate embeddings and to execute the request against an LLM. - Accompanying the Jupyter Notebook files, there are also basic variants of the corresponding examples: [vector_search.py](vector_search.py), [document_loader.py](document_loader.py), and [conversational_memory.py](conversational_memory.py). - `cratedb_rag_customer_support_langchain.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](cratedb_rag_customer_support_langchain.ipynb)[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/cratedb_rag_customer_support_langchain.ipynb) This example illustrates a RAG implementation for a customer support scenario. The dataset used in this example is based on a collection of customer support interactions from Twitter related to Microsoft products or services. The example shows how to use the CrateDB vector store functionality to create a retrieval augmented generation (RAG) pipeline. To implement RAG, we use the Python client driver for CrateDB and vector store support in LangChain. - `cratedb_rag_customer_support_vertexai.ipynb` [![Open on GitHub](https://img.shields.io/badge/Open%20on-GitHub-lightgray?logo=GitHub)](cratedb_rag_customer_support_vertexai.ipynb)[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/cratedb_rag_customer_support_vertexai.ipynb) This example illustrates a RAG implementation for a customer support scenario. It is based on the previous notebook and illustrates how to use the Vertex AI platform on Google Cloud for the RAG pipeline. ## Install To properly set up a sandbox environment for exploring the example notebooks and programs, it is advised to create a Python virtualenv and install the dependencies into it. This way, you can easily wipe the virtualenv and start from scratch at any time. ```shell python3 -m venv .venv source .venv/bin/activate pip install -U -r requirements.txt ``` ## Setup The following commands expect that you are working in a terminal with an activated virtualenv. ```shell source .venv/bin/activate ``` ### CrateDB on localhost To spin up a CrateDB instance without further ado, you can use Docker or Podman.
```shell docker run --rm -it \ --name=cratedb --publish=4200:4200 --publish=5432:5432 \ --env=CRATE_HEAP_SIZE=4g crate -Cdiscovery.type=single-node ``` ### CrateDB Cloud Sign up or log in to [CrateDB Cloud], and create a free tier cluster. Within just a few minutes, a cloud-based development environment is up and running. As soon as your project scales, you can easily move to a different cluster tier or scale horizontally. ## Testing Run all tests. ```shell pytest ``` Run tests selectively. ```shell pytest -k document_loader pytest -k "notebook and loader" ``` In order to force a regeneration of the Jupyter Notebook, use the `--nb-force-regen` option. ```shell pytest -k document_loader --nb-force-regen ``` [Agents]: https://python.langchain.com/docs/modules/agents/ [Callbacks]: https://python.langchain.com/docs/modules/callbacks/ [Chains]: https://python.langchain.com/docs/modules/chains/ [CrateDB]: https://github.com/crate/crate [CrateDB Cloud]: https://console.cratedb.cloud [`FLOAT_VECTOR`]: https://crate.io/docs/crate/reference/en/master/general/ddl/data-types.html#float-vector [`KNN_MATCH`]: https://crate.io/docs/crate/reference/en/master/general/builtins/scalar-functions.html#scalar-knn-match [LangChain]: https://www.langchain.com/ [LangChain documentation]: https://python.langchain.com/ [Memory]: https://python.langchain.com/docs/modules/memory/ [Model I/O]: https://python.langchain.com/docs/modules/model_io/ [Retrieval]: https://python.langchain.com/docs/modules/data_connection/# CrateDB Blog | Leveraging Shared Nothing Architecture and Multi-Model Databases for Scalable Real-Time Analytics _Real-Time Unified Data Layers: A New Era for Scalable Analytics, Search, and AI._ Modern data ecosystems are often fragmented, with scattered data sources, storage systems, and pipelines designed to meet specific business needs. When organizations demand advanced analytics, real-time applications, or machine learning models, these siloed systems struggle to scale and integrate effectively. Combining a Shared Nothing Architecture with a multi-model approach provides an innovative solution to these challenges, enabling scalability, fault tolerance, and flexibility across distributed environments. Understanding Shared Nothing Architecture in Distributed Databases ------------------------------------------------------------------ Distributed databases store and process data across multiple nodes that work as a unified system. In a Shared Nothing Architecture, each node operates independently with its own CPU, memory, and storage. This design eliminates shared resource bottlenecks and offers several advantages: * **Horizontal scalability**: Nodes can be added or removed dynamically, allowing the system to handle increasing data volumes and workloads without disrupting performance. * **Fault tolerance**: If a node fails, the system remains operational with no downtime as other nodes compensate, ensuring high availability. * **Performance optimization**: By avoiding shared resources, Shared Nothing Architecture minimizes latency and ensures consistent throughput for tasks like analytics and transactional queries. Shared Nothing Architecture is especially effective for use cases that require stream processing and high reliability, such as real-time analytics and advanced search. The Multi-model Database Approach --------------------------------- Data in modern organizations exists in diverse formats, including relational tables, JSON documents, key-value pairs, and time-series data. 
Traditional databases are often limited to a single data model, forcing organizations to use multiple systems to manage these formats, leading to complexity and data silos. Multi-model databases address this challenge by supporting multiple data models within a single system. Their benefits include: * Unified data management: A single platform can handle structured, semi-structured, and unstructured data, reducing the need for multiple databases. * Flexible querying: Multi-model databases often use familiar query languages like SQL, simplifying data access and reducing the need for specialized skills. * Cost and operational efficiency: Consolidating workloads into one system minimizes infrastructure costs and simplifies management. * Adaptability to evolving use cases: Multi-model databases are versatile, making them ideal for applications like analytics, IoT, machine learning, generative AI, and agentic AI systems. Combining Shared Nothing Architecture and Multi-model Databases --------------------------------------------------------------- While Shared Nothing Architecture ensures scalability and fault tolerance, multi-model databases provide the flexibility to integrate and query diverse data. Together, they form a robust solution for modern data challenges. Changing existing systems is not always the right solution; it is often more efficient to implement a sidecar approach, where the database integrates with the different data sources. This approach provides the scalability and flexibility needed to deliver projects quickly without going through major infrastructure overhauls. CrateDB: A Practical Example ---------------------------- CrateDB, a modern database for real-time analytics and hybrid search, showcases the advantages of combining [Shared Nothing Architecture](https://cratedb.com/product/features/shared-nothing-architecture) with a [multi-model](https://cratedb.com/resources/white-papers/lp-wp-multi-model-data-management) approach. Built on Shared Nothing Architecture principles, CrateDB delivers distributed scalability and supports diverse data types, making it a practical choice for modern data needs. * **Native SQL for flexible querying**: CrateDB allows users to query relational, document, time-series, geospatial, full-text, and vector data using SQL, eliminating the need for multiple query languages or manual transformations. * **Horizontal scalability**: CrateDB’s Shared Nothing Architecture design distributes workloads dynamically, ensuring high performance even as data volumes grow. * **Schema flexibility**: CrateDB supports schema evolution, enabling organizations to integrate new data sources and adapt to evolving requirements without disruption. * **Seamless integration**: CrateDB offers unified access to diverse data sources, eliminating silos and improving data governance. * **Cost efficiency**: CrateDB is very easy to operate and has a very low footprint compared to other solutions, offering a lower TCO and reducing environmental impact. * **Multi-cloud and hybrid support**: Offered as a service, CrateDB ensures a consistent experience across different cloud providers (AWS, Azure, and GCP). It can also be deployed on-premises to support hybrid scenarios. * **Suited for modern use cases**: CrateDB can ingest complex and large data streams, index all fields instantly, and perform complex aggregations, ad-hoc queries, and search in real time.
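As a simple illustration of querying across data models with plain SQL, consider the following sketch. The table and column names (`machine_events`, `payload`, `event_time`) are hypothetical, and the query assumes the dynamic object column holds numeric sensor metrics; it combines relational filtering, object (JSON) attribute access, and time bucketing in a single statement.

```sql
-- Minimal sketch (hypothetical schema): one SQL statement that combines
-- relational filtering, object/JSON attribute access, and time bucketing.
SELECT DATE_BIN('1 hour'::INTERVAL, event_time, 0) AS period,
       payload['device']['site'] AS site,
       COUNT(*) AS events,
       AVG(payload['metrics']['temperature']) AS avg_temperature
FROM machine_events
WHERE event_time >= NOW() - '1 day'::INTERVAL
  AND payload['device']['status'] = 'active'
GROUP BY 1, 2
ORDER BY 1 DESC;
```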
Conclusion ---------- Combining Shared Nothing Architecture with a multi-model approach offers a powerful solution for managing distributed data environments. By integrating CrateDB as a sidecar database, organizations can modernize their data architectures for real-time analytics and hybrid search, while avoiding significant disruptions and minimizing costs. This strategy delivers scalable, flexible, and cost-effective data management, empowering businesses to optimize their data ecosystems and thrive in a data-driven world.
Which Database for Digital Twin Projects? ========================================= Introduction to Digital Twins ----------------------------- Digital twins are virtual representations of physical objects, processes, or systems in the digital realm. By integrating real-time data, analytics, and simulation models, they create a dynamic, virtual counterpart of a physical entity. This technology enables organizations to gain deeper insights into their assets and operations, leading to improved performance, cost savings, and better decision-making. Digital twins continuously collect data on operational status, environmental conditions, and user interactions. This information is gathered from sensors, user inputs, and various data sources, ensuring that the virtual model accurately reflects the real-world object in real time. To function effectively, digital twins require both real-time and historical data, high data accuracy, and a wide range of data types, including sensor readings, operational metrics, and environmental factors. A digital twin system typically consists of three key database components: * Data ingestion layer – Collects and integrates data from multiple sources. * Data processing layer – Analyzes and interprets incoming data. * Data storage layer – Archives and manages historical data. Digital twins are used across industries for applications such as predictive maintenance, process optimization, product development, and real-time monitoring. They play a crucial role in sectors like manufacturing, healthcare, and smart cities, driving efficiency and innovation. * **Predictive Maintenance:** By monitoring real-time data from a physical asset, a digital twin can detect anomalies and predict maintenance needs, optimizing asset performance and reducing downtime. * **Performance Optimization:** Digital twins enable continuous monitoring and analysis of various parameters, allowing for optimization of processes, systems, or products to enhance efficiency and effectiveness. * **Simulation and Testing:** Digital twins can be used for simulating and testing scenarios, allowing for experimentation and evaluation without the need for physical prototypes. * **Product Lifecycle Management:** From design and manufacturing to operation and maintenance, digital twins can provide valuable insights throughout a product's lifecycle, facilitating decision-making and improving overall performance. Digital twins offer a way to bridge the gap between the physical and digital worlds. Whether it’s for predictive maintenance, performance optimization, simulation and testing, or product lifecycle management, digital twins offer huge potential to improve operational efficiency and position enterprises for future growth.
CrateDB as the database for digital twins ----------------------------------------- CrateDB is well suited to underpin a digital twin initiative: it significantly enhances the effectiveness and capabilities of digital twin implementations while reducing development effort and optimizing total cost of ownership. ### Comprehensive data collection and flexible data modeling CrateDB can collect and store a wide range of data from various sources: real-time sensor data, historical data, geospatial data, operational parameters, environmental conditions, and other relevant information about the physical entity being modeled. CrateDB can store complex objects before you even know exactly what you want to model. New data types and formats can be added on the fly without any need for human intervention, removing the need to keep multiple databases in sync. ### Scalability and Performance CrateDB is scalable from one to hundreds of nodes and can handle huge volumes of information. CrateDB also provides [high-performance capabilities](https://cratedb.com/product/features/query-performance) with query response times in milliseconds to process and analyze the data efficiently, including querying the twins and their relationships, ensuring real-time insights and responsiveness. There is no need to downsample or pre-aggregate the data. ### Data Integration CrateDB offers easy third-party integration with many solutions for ingestion, visualization, reporting, and analysis thanks to [native SQL](https://cratedb.com/product/features/native-sql) and the [PostgreSQL Wire Protocol](https://cratedb.com/product/features/postgresql-wire-protocol), drivers and libraries for many programming languages, and its [REST API](https://cratedb.com/product/features/rest-api). ### Time-Series Data Management CrateDB offers advanced time-series capabilities, including instant access to data regardless of data volume, thanks to its [distributed architecture](https://cratedb.com/product/features/distributed-database) with efficient [sharding](https://cratedb.com/product/features/sharding) and [partitioning](https://cratedb.com/product/features/partitioning) mechanisms. It supports efficient storage, retrieval, and querying of temporal data to enable trend analysis, forecasting, and historical comparisons. ### Metadata and Contextual Information CrateDB offers a single repository to store and retrieve metadata associated with digital twins. This includes information about the physical entity, data sources, data quality, and modeling assumptions. Time-series data can be contextualized with this information in real time. This way, you can easily switch from a technical view to a business view. ### Data Analytics and AI Integration CrateDB facilitates the integration of data analytics and AI technologies. It supports running complex algorithms, machine learning models, and statistical analysis directly on the stored data. CrateDB also provides APIs, drivers and the PostgreSQL Wire Protocol to connect with external analytics tools and platforms.
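To make the flexible data modeling and contextualization described above more concrete, here is a minimal SQL sketch. The schema is hypothetical: `twin_readings`, `twin_metadata`, and the `payload` object column are illustrative names, and the example assumes the telemetry payload carries numeric sensor values.

```sql
-- Minimal sketch (hypothetical schema): raw twin telemetry with a dynamically
-- typed OBJECT column, partitioned by month to keep large volumes manageable.
CREATE TABLE IF NOT EXISTS twin_readings (
    twin_id TEXT,
    ts TIMESTAMP WITH TIME ZONE,
    month TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS DATE_TRUNC('month', ts),
    payload OBJECT(DYNAMIC)
) PARTITIONED BY (month);

-- Contextualize time-series data with twin metadata at query time,
-- assuming a hypothetical twin_metadata table keyed by twin_id.
SELECT r.twin_id,
       m.location,
       AVG(r.payload['temperature']) AS avg_temperature
FROM twin_readings r
JOIN twin_metadata m ON m.twin_id = r.twin_id
WHERE r.ts >= NOW() - '7 days'::INTERVAL
GROUP BY 1, 2;
```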