dt-sql-parser


English | 简体中文

dt-sql-parser is a SQL parser project built with ANTLR4, aimed mainly at the big data field. ANTLR4 generates the basic Parser, Visitor, and Listener, which makes it easy to implement lexing, parsing, AST traversal, and other features.

Additionally, it provides advanced features such as SQL Validation, Code Completion, and collecting tables and columns used in SQL.

Supported SQL:

  • MySQL
  • Flink
  • Spark
  • Hive
  • PostgreSQL
  • Trino
  • Impala

Tips: This project targets TypeScript by default; you can also try compiling it to other languages if needed.


Integrating SQL Parser with Monaco Editor

We also provide monaco-sql-languages to make it easy to integrate dt-sql-parser with the Monaco Editor.


Installation

# use npm
npm i dt-sql-parser --save

# use yarn
yarn add dt-sql-parser

Usage

We recommend learning the fundamental usage before continuing. The dt-sql-parser library provides SQL classes for different types of SQL.

import { MySQL, FlinkSQL, SparkSQL, HiveSQL, PostgreSQL, TrinoSQL, ImpalaSQL } from 'dt-sql-parser';

Before using syntax validation, code completion, and other features, you need to instantiate the parser for the relevant SQL type. Take MySQL as an example:

const mysql = new MySQL();

The following usage examples use MySQL; the parsers for other SQL types are used in the same way.

Syntax Validation

First create a parser instance, then call the validate method on the SQL instance to validate the SQL content. If validation fails, it returns an array of error messages.

import { MySQL } from 'dt-sql-parser';

const mysql = new MySQL();
const incorrectSql = 'selec id,name from user1;';
const errors = mysql.validate(incorrectSql);

console.log(errors); 

output:

/*
[
  {
    endCol: 5,
    endLine: 1,
    startCol: 0,
    startLine: 1,
    message: "..."
  }
]
*/

Tokenizer

Call the getAllTokens method on the SQL instance:

import { MySQL } from 'dt-sql-parser';

const mysql = new MySQL()
const sql = 'select id,name,sex from user1;'
const tokens = mysql.getAllTokens(sql)

console.log(tokens)

output:

/*
[
  {
    channel: 0
    column: 0
    line: 1
    source: [SqlLexer, InputStream]
    start: 0
    stop: 5
    tokenIndex: -1
    type: 137
    _text: null
  },
  ...
]
*/

Visitor

Traverse the parse tree nodes with the Visitor:

import { MySQL, MySqlParserVisitor } from 'dt-sql-parser';

const mysql = new MySQL();
const sql = `select id, name from user1;`;
const parseTree = mysql.parse(sql);

class MyVisitor extends MySqlParserVisitor<string> {
    defaultResult(): string {
        return '';
    }
    aggregateResult(aggregate: string, nextResult: string): string {
        return aggregate + nextResult;
    }
    visitProgram = (ctx) => {
        return this.visitChildren(ctx);
    };
    visitTableName = (ctx) => {
        return ctx.getText();
    };
}
const visitor = new MyVisitor();
const result = visitor.visit(parseTree);

console.log(result);

output:

/*
user1
*/

Listener

Access specified nodes in the AST with the Listener:

import { MySQL, MySqlParserListener } from 'dt-sql-parser';

const mysql = new MySQL();
const sql = 'select id, name from user1;';
const parseTree = mysql.parse(sql);

class MyListener extends MySqlParserListener {
    result = '';
    enterTableName = (ctx): void => {
        this.result = ctx.getText();
    };
}

const listener = new MyListener();
mysql.listen(listener, parseTree);

console.log(listener.result)

output:

/*
user1
*/

Splitting SQL statements

Taking FlinkSQL as an example, call the splitSQLByStatement method on the SQL instance:

import { FlinkSQL } from 'dt-sql-parser';

const flink = new FlinkSQL();
const sql = 'SHOW TABLES;\nSELECT * FROM tb;';
const sqlSlices = flink.splitSQLByStatement(sql);

console.log(sqlSlices)

output:

/*
[
  {
    startIndex: 0,
    endIndex: 11,
    startLine: 1,
    endLine: 1,
    startColumn: 1,
    endColumn: 12,
    text: 'SHOW TABLES;'
  },
  {
    startIndex: 13,
    endIndex: 29,
    startLine: 2,
    endLine: 2,
    startColumn: 1,
    endColumn: 17,
    text: 'SELECT * FROM tb;'
  }
]
*/

Code Completion

Obtaining code completion information at a specified position in SQL.

Call the getSuggestionAtCaretPosition method on the SQL instance, passing the SQL content and the line and column numbers of the position where code completion is desired. See the CaretPosition Of Code Completion section below for additional explanations about CaretPosition.

  • keyword candidates list

    import { FlinkSQL } from 'dt-sql-parser';
    
    const flink = new FlinkSQL();
    const sql = 'CREATE ';
    const pos = { lineNumber: 1, column: 8 }; // the caret position right after 'CREATE '
    const keywords = flink.getSuggestionAtCaretPosition(sql, pos)?.keywords;
    
    console.log(keywords);
    

    output:

    /*
    [ 'CATALOG', 'FUNCTION', 'TEMPORARY', 'VIEW', 'DATABASE', 'TABLE' ] 
    */
    
  • Obtaining information related to grammar completion

    import { FlinkSQL } from 'dt-sql-parser';
    
    const flink = new FlinkSQL();
    const sql = 'SELECT * FROM tb';
    const pos = { lineNumber: 1, column: 17 }; // right after 'tb'
    const syntaxSuggestions = flink.getSuggestionAtCaretPosition(sql, pos)?.syntax;
    
    console.log(syntaxSuggestions);
    

    output:

    /*
    [
      {
        syntaxContextType: 'table',
        wordRanges: [
          {
            text: 'tb',
            startIndex: 14,
            stopIndex: 15,
            line: 1,
            startColumn: 15,
            stopColumn: 16
          }
        ]
      },
      {
        syntaxContextType: 'view',
        wordRanges: [
          {
            text: 'tb',
            startIndex: 14,
            stopIndex: 15,
            line: 1,
            startColumn: 15,
            stopColumn: 16
          }
        ]
      }
    ]
    */
    

The grammar-related code completion information is an array, where each item represents a kind of grammar element that can be filled in at that position. For example, the output above means the position can be filled with either a table name or a view name. syntaxContextType is the type of grammar element that can be completed, and wordRanges is the content that has already been typed there.
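
For example, a completion provider might map these suggestions to concrete completion items. The sketch below is only illustrative: fetchTableNames is a hypothetical helper standing in for your own metadata lookup, and syntaxContextType is compared against the plain strings shown in the output above.

import { FlinkSQL } from 'dt-sql-parser';

const flink = new FlinkSQL();

// Hypothetical helper: replace with a lookup against your own metadata service.
const fetchTableNames = (): string[] => ['user1', 'orders'];

function getCompletionItems(sql: string, pos: { lineNumber: number; column: number }): string[] {
    const suggestions = flink.getSuggestionAtCaretPosition(sql, pos);
    const items: string[] = [...(suggestions?.keywords ?? [])];

    for (const item of suggestions?.syntax ?? []) {
        // Offer table names when the caret position can be completed with a table.
        if (String(item.syntaxContextType) === 'table') {
            items.push(...fetchTableNames());
        }
    }
    return items;
}

console.log(getCompletionItems('SELECT * FROM tb', { lineNumber: 1, column: 17 }));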

Get all entities in SQL (e.g. table, column)

Call the getAllEntities method on the SQL instance, passing the SQL text and, optionally, the line and column numbers of the caret position.

  import { FlinkSQL } from 'dt-sql-parser';

  const flink = new FlinkSQL();
  const sql = 'SELECT * FROM tb;';
  const pos = { lineNumber: 1, column: 17 }; // right after 'tb'
  const entities = flink.getAllEntities(sql, pos);

  console.log(entities);

output:

/*
  [
    {
      entityContextType: 'table',
      text: 'tb',
      position: {
        line: 1,
        startIndex: 14,
        endIndex: 15,
        startColumn: 15,
        endColumn: 17
      },
      belongStmt: {
        stmtContextType: 'selectStmt',
        position: [Object],
        rootStmt: [Object],
        parentStmt: [Object],
        isContainCaret: true
      },
      relatedEntities: null,
      columns: null,
      isAlias: false,
      origin: null,
      alias: null
    }
  ]
*/

The position argument is optional. If a position is passed and a collected entity belongs to the statement at that position, the statement object the entity belongs to is marked with isContainCaret. Combined with the code completion feature, this can help you quickly filter out the entities you need.
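
For example, a minimal sketch of such filtering (the SQL text and caret position are arbitrary):

import { FlinkSQL } from 'dt-sql-parser';

const flink = new FlinkSQL();
const sql = 'SHOW TABLES;\nSELECT * FROM tb;';
const pos = { lineNumber: 2, column: 17 }; // caret inside the second statement

// Keep only the entities whose statement contains the caret.
const entities = flink.getAllEntities(sql, pos) ?? [];
const entitiesAroundCaret = entities.filter((entity) => entity.belongStmt.isContainCaret);

console.log(entitiesAroundCaret.map((entity) => entity.text)); // expected to include 'tb'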

Other APIs

  • createLexer creates an ANTLR4 Lexer instance and returns it;
  • createParser creates an ANTLR4 Parser instance and returns it;
  • parse parses the input SQL and returns the parse tree;
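
A minimal sketch of these lower-level APIs, assuming createLexer and createParser take the SQL text as input:

import { MySQL } from 'dt-sql-parser';

const mysql = new MySQL();
const sql = 'SELECT id FROM user1;';

const lexer = mysql.createLexer(sql);   // ANTLR4 Lexer instance
const parser = mysql.createParser(sql); // ANTLR4 Parser instance
const parseTree = mysql.parse(sql);     // parse tree of the whole input

console.log(lexer, parser, parseTree);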

Position and Range

Some results returned by the APIs of dt-sql-parser contain text position information. The starting values and ranges of line numbers, column numbers, and indexes can be confusing, so they are explained below.

Index

The index starts at 0, which is the more intuitive convention in programming.

index-image

For an index range, the start index begins at 0 and the end index is n-1. As shown in the figure above, the index range of the blue text is represented as follows:

{
    startIndex: 0,
    endIndex: 3
}
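
Because both startIndex and endIndex are inclusive, recovering the covered text with String.prototype.slice needs endIndex + 1. A small illustrative sketch (the sample text is arbitrary, not the text from the figure):

const text = 'SHOW TABLES;';
const range = { startIndex: 0, endIndex: 3 };

// slice excludes its second argument, while endIndex is inclusive, so add 1.
const covered = text.slice(range.startIndex, range.endIndex + 1);

console.log(covered); // 'SHOW' (4 characters, indexes 0 to 3)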

Line

The line starts at 1.

line-image

For a range of multiple lines, the line number starts from 1 and ends with n. A range of the first and second lines is represented as follows:

{
    startLine: 1,
    endLine: 2
}

Column

The column also starts at 1.

column-image

It is easier to understand the column number by comparing it to the cursor position in an editor. For a range of multiple columns, the column number starts from 1 and ends with n+1. As shown in the figure above, the column range of the blue text is represented as follows:

{
    startColumn: 1,
    endColumn: 5
}
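
Because columns are 1-based and endColumn is exclusive, the covered text has length endColumn - startColumn; converting to 0-based string indexes means subtracting 1 from each bound. A small illustrative sketch (the sample text is arbitrary):

const line = 'SHOW TABLES;';
const range = { startColumn: 1, endColumn: 5 };

// Columns start at 1 and endColumn is exclusive, so shift both bounds by -1.
const covered = line.slice(range.startColumn - 1, range.endColumn - 1);

console.log(covered);        // 'SHOW'
console.log(covered.length); // 4 === endColumn - startColumn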

CaretPosition Of Code Completion

The code completion of dt-sql-parser was designed to be used in an editor, so the second parameter (CaretPosition) of the getSuggestionAtCaretPosition method is a line and column number rather than a character index. This makes it easier to integrate code completion into an editor: the editor only needs to read its text content and cursor position at a given moment to call dt-sql-parser's code completion, without any additional calculation.
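
For example, with monaco-editor (an assumption here, not a dependency of this library), the cursor position can be passed through directly, since Monaco also uses a 1-based lineNumber and column. A minimal sketch:

import { FlinkSQL } from 'dt-sql-parser';
import type { editor } from 'monaco-editor';

const flink = new FlinkSQL();

// Monaco positions are already 1-based lineNumber/column, so they can be
// handed to dt-sql-parser without extra calculation.
function suggestAtCursor(codeEditor: editor.IStandaloneCodeEditor) {
    const sql = codeEditor.getValue();
    const position = codeEditor.getPosition();
    if (!position) return null;

    return flink.getSuggestionAtCaretPosition(sql, {
        lineNumber: position.lineNumber,
        column: position.column,
    });
}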

In other scenarios, however, you may need to derive the caret position required by code completion through conversion or calculation; a minimal conversion sketch is given at the end of this section. Before that, there are a few details worth knowing.

The code completion of dt-sql-parser depends on antlr4-c3, which is a great library. dt-sql-parser only encapsulates and converts on top of antlr4-c3, for example converting the line and column number information into the token index required by antlr4-c3, as shown in the figure below:

column-image

Regard the columns in the figure as cursor positions: if you put this text into an editor, you get 13 possible cursor positions, while dt-sql-parser produces 4 Tokens when parsing this text. An important strategy of the code completion is: when the cursor (CaretPosition) has not completely left a Token, dt-sql-parser considers that Token incomplete, and code completion will infer what can be filled in at that Token's position.

For example, if you want to know what to fill in after SHOW through the code completion, the caret position should be:

{
    lineNumber: 1,
    column: 6
}

At this time, dt-sql-parser will think that SHOW is already a complete Token, and it should infer what can be filled in after SHOW. If the column in the passed-in caret position is 5, then dt-sql-parser will think that SHOW has not been completed, and then infer what can be filled in the position of SHOW. In other words, in the figure above, column: 5 belongs to token: 0, and column: 6 belongs to token: 1.

For the editor, this strategy is also more intuitive. After the user enters SHOW, before pressing the space key, the user probably has not finished entering, maybe the user wants to enter something like SHOWS. When the user presses the space key, the editor thinks that the user wants to enter the next Token, and it is time to ask dt-sql-parser what can be filled in the next Token position.
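
If you only have an absolute character offset instead of a line and column number, a caret position can be derived from the definitions above: lines and columns both start at 1, and the column equals the number of characters before the caret on its line plus 1. A minimal sketch (offsetToCaretPosition is an illustrative helper, not part of dt-sql-parser):

// Hypothetical helper: convert a 0-based character offset into the
// { lineNumber, column } shape expected by getSuggestionAtCaretPosition.
function offsetToCaretPosition(text: string, offset: number) {
    const before = text.slice(0, offset);
    const lines = before.split('\n');
    return {
        lineNumber: lines.length,                   // lines start at 1
        column: lines[lines.length - 1].length + 1, // columns start at 1
    };
}

console.log(offsetToCaretPosition('SHOW ', 5)); // { lineNumber: 1, column: 6 }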


License

MIT