`bin/options.json`: 9 additions, 1 deletion
````diff
@@ -64,6 +64,14 @@
     "--escape":{
       "desc":"escape character used in quoted column. Default is double quote (\") according to RFC4108. Change to back slash (\\) or other chars for your own case.",
       "type":"string"
+    },
+    "--ignoreColumns": {
+      "desc": "Columns to ignore on input. e.g. --ignoreColumns=# --ignoreColumns='[0,4,5]' ",
+      "type": "~object"
+    },
+    "--includeColumns": {
+      "desc": "Columns to include on input. e.g. --includeColumns=# --includeColumns='[0,4,5]' ",
````
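Both new options take an array of column indexes, one to drop columns and one to keep only the listed ones. A minimal sketch of that filtering logic (illustrative only; `filterColumns` is a hypothetical helper, not part of csvtojson):

```javascript
// Sketch (not csvtojson internals): how an ignoreColumns /
// includeColumns index list could filter one parsed CSV row.
function filterColumns(row, { includeColumns, ignoreColumns } = {}) {
  return row.filter((_, idx) => {
    // includeColumns acts as a whitelist of indexes...
    if (includeColumns && !includeColumns.includes(idx)) return false;
    // ...while ignoreColumns acts as a blacklist.
    if (ignoreColumns && ignoreColumns.includes(idx)) return false;
    return true;
  });
}

const row = ["a", "b", "c", "d", "e", "f"];
console.log(filterColumns(row, { ignoreColumns: [0, 4, 5] }));  // → [ 'b', 'c', 'd' ]
console.log(filterColumns(row, { includeColumns: [0, 2, 3] })); // → [ 'a', 'c', 'd' ]
```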
````diff
-Version 1.1.0 has added new features and optimised lib performance. It also introduced simpler APIs to use. Thus readme is re-written to adapt the preferred new APIs. The lib will support old APIs. To review the old readme please [click here](https://github.com/Keyang/node-csvtojson/blob/develop/readme-old.md).
+Version 1.1.0 has added new features and optimised lib performance. It also introduced simpler APIs to use. Thus readme is re-written to adapt the preferred new APIs. The lib will support old APIs. To review the old readme please [click here](https://github.com/Keyang/node-csvtojson/blob/develop/readme-old.md).
 
 *[Performance Optimisation](https://github.com/Keyang/node-csvtojson/blob/develop/docs/performance.md#performance-optimisation): V1.1.0 is 30%-50% faster
 * Better error tolerance
````
````diff
@@ -174,7 +174,7 @@ Convert csv file and save result to json file:
 ***delimiter**: delimiter used for seperating columns. Use "auto" if delimiter is unknown in advance, in this case, delimiter will be auto-detected (by best attempt). Use an array to give a list of potential delimiters e.g. [",","|","$"]. default: ","
 ***quote**: If a column contains delimiter, it is able to use quote character to surround the column content. e.g. "hello, world" wont be split into two columns while parsing. Set to "off" will ignore all quotes. default: " (double quote)
 ***trim**: Indicate if parser trim off spaces surrounding column content. e.g. " content " will be trimmed to "content". Default: true
-***checkType**: This parameter turns on and off whether check field type. default is true.
+***checkType**: This parameter turns on and off whether check field type. default is true.
 ***toArrayString**: Stringify the stream output to JSON array. This is useful when pipe output to a file which expects stringified JSON array. default is false and only stringified JSON (without []) will be pushed to downstream.
 ***ignoreEmpty**: Ignore the empty value in CSV columns. If a column value is not giving, set this to true to skip them. Defalut: false.
 ***workerNum**: Number of worker processes. The worker process will use multi-cores to help process CSV data. Set to number of Core to improve the performance of processing large csv file. Keep 1 for small csv files. Default 1.
````
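The `quote` behaviour described above ("hello, world" staying one column even though it contains the delimiter) can be sketched with a toy line splitter. This is purely illustrative and not csvtojson's actual parser, which also handles the `escape` character and multi-line fields:

```javascript
// Toy sketch: a delimiter inside a quoted column does not split it.
function splitLine(line, delimiter = ",", quote = '"') {
  const cols = [];
  let cur = "";
  let inQuote = false;
  for (const ch of line) {
    if (ch === quote) inQuote = !inQuote;     // toggle quoted state
    else if (ch === delimiter && !inQuote) {  // delimiter outside quotes ends the column
      cols.push(cur);
      cur = "";
    } else cur += ch;
  }
  cols.push(cur);
  return cols;
}

console.log(splitLine('"hello, world",42'));  // → [ 'hello, world', '42' ]
console.log(splitLine("a|b|c", "|"));         // → [ 'a', 'b', 'c' ]
```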
````diff
@@ -249,6 +249,8 @@ Following parameters are supported:
 ***checkColumn**: whether check column number of a row is the same as headers. If column number mismatched headers number, an error of "mismatched_column" will be emitted.. default: false
 ***eol**: End of line character. If omitted, parser will attempt retrieve it from first chunk of CSV data. If no valid eol found, then operation system eol will be used.
 ***escape**: escape character used in quoted column. Default is double quote (") according to RFC4108. Change to back slash (\\) or other chars for your own case.
+***includeColumns**: This parameter instructs the parser to include only those columns as specified by an array of column indexes. Example: [0,2,3] will parse and include only columns 0, 2, and 3 in the JSON output.
+***ignoreColumns**: This parameter instructs the parser to ignore columns as specified by an array of column indexes. Example: [1,3,5] will ignore columns 1, 3, and 5 and will not return them in the JSON output.
 
 All parameters can be used in Command Line tool.
````
````diff
@@ -311,7 +313,7 @@ csv()
 })
 ```
 
-Note that if `error` being emitted, the process will stop as node.js will automatically `unpipe()` upper-stream and chained down-stream<sup>1</sup>. This will cause `end` / `end_parsed` event never being emitted because `end` event is only emitted when all data being consumed <sup>2</sup>.
+Note that if `error` being emitted, the process will stop as node.js will automatically `unpipe()` upper-stream and chained down-stream<sup>1</sup>. This will cause `end` / `end_parsed` event never being emitted because `end` event is only emitted when all data being consumed <sup>2</sup>.
 2.[Writable end Event](https://nodejs.org/api/stream.html#stream_event_end)
````
````diff
@@ -367,7 +369,7 @@ csv()
   cb(newData);
 })
 .on('json',(jsonObj)=>{
-
+
 });
 ```
````
````diff
@@ -385,7 +387,7 @@ csv()
   return fileLineString
 })
 .on('json',(jsonObj)=>{
-
+
 });
 ```
````
````diff
@@ -464,7 +466,7 @@ Using csvtojson to convert, the result would be like:
 },
 "description": "Awesome castle"
 }]
-```
+```
 
 ### No nested JSON
````
````diff
@@ -490,7 +492,7 @@ csv({flatKeys:true})
 
 1. First row of csv source. Use first row of csv source as header row. This is default.
 2. If first row of csv source is header row but it is incorrect and need to be replaced. Use `headers:[]` and `noheader:false` parameters.
-3. If original csv source has no header row but the header definition can be defined. Use `headers:[]` and `noheader:true` parameters.
+3. If original csv source has no header row but the header definition can be defined. Use `headers:[]` and `noheader:true` parameters.
 4. If original csv source has no header row and the header definition is unknow. Use `noheader:true`. This will automatically add `fieldN` header to csv cells
````
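The four header cases above amount to zipping a header list over each row, with `fieldN` as the automatic fallback when no header is known. A toy sketch of that labelling (illustrative only, not csvtojson internals):

```javascript
// Sketch: label a row's cells with headers, falling back to fieldN
// (the behaviour described for `noheader:true` without `headers:[]`).
function toJsonRow(cells, headers) {
  const out = {};
  cells.forEach((cell, i) => {
    const key = (headers && headers[i]) || `field${i + 1}`;
    out[key] = cell;
  });
  return out;
}

console.log(toJsonRow(["1", "2"], ["a", "b"])); // → { a: '1', b: '2' }
console.log(toJsonRow(["1", "2"]));             // → { field1: '1', field2: '2' }
```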
````diff
@@ -545,7 +547,7 @@ See [here](https://github.com/Keyang/node-csvtojson/blob/develop/docs/performanc
 
 There are some limitations when using multi-core feature:
 
-* Does not support if a column contains line break.
+* Does not support if a column contains line break.
 
 #Change Log
````
````diff
@@ -641,5 +643,3 @@ There are some limitations when using multi-core feature:
 * Deprecated applyWebServer
 * Added construct parameter for Converter Class
 * Converter Class now works as a proper stream object
````