I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.
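Concretely, the kind of design I have in mind looks like the sketch below: split the file into byte ranges, align each range to line boundaries, and grep the ranges in worker processes. All names here (`parallel_scan`, the file path, the search pattern) are placeholders, not an existing tool.

```python
import multiprocessing as mp
import os

def scan_chunk(args):
    """Count lines containing `needle` whose first byte lies in [start, end)."""
    path, needle, start, end = args
    hits = 0
    with open(path, "rb") as f:
        f.seek(start)
        if start > 0:
            # If the byte before `start` is not a newline, we landed
            # mid-line; skip the remainder (the previous chunk owns it).
            f.seek(start - 1)
            if f.read(1) != b"\n":
                f.readline()
        while f.tell() < end:
            line = f.readline()  # a line straddling `end` is read fully
            if not line:
                break
            if needle in line:
                hits += 1
    return hits

def parallel_scan(path, needle, workers=None):
    """Split the file into contiguous byte ranges and scan them in parallel."""
    workers = workers or os.cpu_count() or 1
    size = os.path.getsize(path)
    step = max(1, size // workers)
    ranges = [(path, needle, i * step,
               size if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with mp.Pool(workers) as pool:
        return sum(pool.map(scan_chunk, ranges))

# Usage (hypothetical file and pattern):
#   parallel_scan("events.jsonl", b'"error"', workers=8)
```

The boundary rule (a chunk owns every line whose first byte falls inside its range) guarantees each line is counted exactly once, even when a line straddles two ranges.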

Would love to hear how you would design it.

  • pelya@lemmy.world
    10 hours ago

    One additional trick is to compress your files before writing them to disk, using a fast, lightweight codec such as parallel gzip (the pigz command) or lzop. When parsing them you trade smaller disk reads for higher CPU usage, which comes out as a net speedup if you have a server-class CPU with plenty of cache.
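
    To make the trade-off concrete: pigz writes ordinary gzip streams, so Python's stdlib `gzip` module can stream the output back line by line with no special handling. A minimal sketch, assuming a hypothetical `events.jsonl` compressed beforehand:

    ```python
    import gzip

    def scan_gzip_jsonl(path, needle):
        """Stream a gzip-compressed JSONL file line by line, counting matches.
        pigz output is standard gzip format, so gzip.open() decompresses it
        transparently; reads from disk are smaller, CPU does the extra work."""
        hits = 0
        with gzip.open(path, "rb") as f:
            for line in f:
                if needle in line:
                    hits += 1
        return hits

    # Compressing on the way in (shell; pigz uses all cores by default):
    #   pigz --keep events.jsonl      # writes events.jsonl.gz, keeps the original
    ```

    A quick benchmark on your own hardware is worth doing, since whether decompression beats raw disk reads depends entirely on the disk-bandwidth-to-CPU ratio.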