• Doomsider@lemmy.world · 5 days ago

    My five thousand line bash script can do things that one hundred thousand lines of code could not do.

    On the bright side, at least script monkeys can now look down on vibe coders.

  • KSP Atlas@sopuli.xyz · 5 days ago

    I think the most I’ve done with awk is write a battery monitor applet. It involved parsing data from /sys and making decisions based on it, so awk seemed like a decent choice.
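
    A minimal sketch of that kind of applet, assuming a BAT0 device under /sys/class/power_supply (the device name and the 20% threshold are illustrative):

        #!/bin/sh
        # Read capacity (%) and charge status from /sys; warn when low.
        # BAT0 is an assumption; device names vary by machine.
        awk '
            FNR == 1 && FILENAME ~ /capacity/ { capacity = $1 }
            FNR == 1 && FILENAME ~ /status/   { status = $1 }
            END {
                if (status == "Discharging" && capacity + 0 < 20)
                    print "battery low: " capacity "%"
                else
                    print status ": " capacity "%"
            }
        ' /sys/class/power_supply/BAT0/capacity \
          /sys/class/power_supply/BAT0/status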

  • grrgyle@slrpnk.net · 5 days ago

    Hey I throw a /^regexp.*/ {print $NF} in there sometimes!

    …but yes, it’s mostly print $1—but only because I mix up the parameters whenever I try to use cut!
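
    For the record, the two spellings of “first field” (same input on stdin in both cases):

        awk '{print $1}'     # field number goes in the program text
        cut -d' ' -f1        # -d sets the delimiter, -f picks the field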

  • Lauchmelder@feddit.org · 5 days ago

    Why spend 30 seconds manually editing some text when you can spend 30 minutes cobbling together a pipeline involving awk, sed and jq

    • freijon@lemmings.world · 2 days ago

      Or 3 minutes cobbling one together in nushell. Yes, I’m a nushell fanboy. I still over-automate everything, but with nushell it’s actually fun.

      • Tangent5280@lemmy.world · 5 days ago

        The important part is to learn the limits of any tool. Nowadays I no longer use jq for anything long or complicated. Filter and view data? jq is fine. Anything more and I just cook up a Python script.
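
        Roughly where that line sits, as a sketch (the endpoint and field names are hypothetical):

            # fine for jq: project a couple of fields out of an API response
            curl -s https://api.example.com/items \
                | jq -r '.items[] | "\(.id)\t\(.name)"'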

          • Tangent5280@lemmy.world · 5 days ago

            How do you get complex data structures to work? I was alienated from scripting in zsh because I wanted something like a dict and realised I would have to write my own implementation. Is there a workaround for that?

            • tal@lemmy.today · 5 days ago

              I mean, there’s a point in data structure complexity where it’s useful to use Python.

              But as to dicts, sure. You’re looking for zsh’s “associative array”. Bash has it too.

              zsh

              $ typeset -A mydict
              $ mydict[foo]=bar 
              $ echo $mydict[foo]
              bar
              $
              

              bash

              $ typeset -A mydict
              $ mydict[foo]=bar
              $ echo ${mydict[foo]}
              bar
              $
              
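
              Iterating over the keys works too (bash shown; in zsh the spelling is ${(k)mydict}):

              $ for key in "${!mydict[@]}"; do echo "$key -> ${mydict[$key]}"; done
              foo -> bar
              $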
              • Tangent5280@lemmy.world · 5 days ago

                This will do nicely - I had several workflows where I’d hit an API and get a massive, deeply nested JSON response; I’d use jq to pull out the specific data and then do a bunch of stuff with that filtered data. I had pretty much resigned myself to using Python, because my requirements kept getting more complicated and looking up how to do each new thing was slowing me down massively.
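
                Gluing the two together, roughly (API and field names are hypothetical):

                    # load jq output into a bash associative array via @tsv
                    declare -A size_by_name
                    while IFS=$'\t' read -r name size; do
                        size_by_name[$name]=$size
                    done < <(curl -s https://api.example.com/items \
                        | jq -r '.items[] | [.name, .size] | @tsv')
                    echo "${size_by_name[foo]}"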

    • Laurel Raven@lemmy.zip · 5 days ago

      This is definitely somewhere PowerShell shines; all of that is built in and really easy to use.

      • Laser@feddit.org · 5 days ago

        People are hating on PowerShell way too much. I don’t really like its syntax, but it has a much better approach to handling data in the terminal. We have nu and elvish nowadays, but MS was really early with the concept, and I think they learned from the shortcomings of POSIX-compatible shells.

        • Laurel Raven@lemmy.zip · 5 days ago

          I really can’t stress enough how much power and flexibility comes with an object oriented shell, especially with the dotnet type system behind it.

          I think most people who hate it just do so either because it came from Microsoft (which… yeah, that’s understandable), or because it’s a different way of thinking about it (and/or they spent a lot of effort learning how to parse data from strings effectively and hate that it’s been made easier?). But love it or hate it, it is effective and powerful, and I find myself missing that when working with bash.

      • tal@lemmy.today · 5 days ago

        To be fair, a lot of programs don’t separate their fields with a single character: they pad with multiple spaces, and cut doesn’t collapse runs of whitespace. So you probably want something more like tr -s " "|cut -d" " -f3 if you want behavior like awk’s field-splitting.

        $ iostat |grep ^nvme0n1
        nvme0n1          29.03       131.52       535.59       730.72    2760247   11240665   15336056
        $ iostat |grep nvme0n1|awk '{print $3}'
        131.38
        $ iostat |grep nvme0n1|tr -s " "|cut -d" " -f3
        131.14
        $
        
        • ThunderLegend@sh.itjust.works · 5 days ago

          This is awesome! Looks like something out of an LPIC-1 textbook. I never got the certification, but I’ve seen a couple of books for it and remember seeing examples like this one.

        • TechLich@lemmy.world · 5 days ago

          I never understood why so many bash scripts pipe grep to awk when regex matching is one of awk’s main strengths.

          Like… Why

          grep ^nvme0n1 | awk '{print $3}'

          over just

          awk '/^nvme0n1/ {print $3}'

          • FooBarrington@lemmy.world · 5 days ago

            Because by the time I use awk again, I’ve completely forgotten that it supports this stuff, and the discoverability is horrendous.

  • lime!@feddit.nu · 6 days ago

    my favorite awk snippet is !x[$0]++ which is like uniq, except duplicates don’t have to be adjacent. basically, it’s equivalent to print_this_line = line_cache[$current_line] == 0; line_cache[$current_line] += 1; if $print_this_line then print $current_line end.

    really useful for those long spammy logs.
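
    spelled out as an explicit program (spam.log is just a stand-in file):

        awk '{ if (!seen[$0]) print; seen[$0] = 1 }' spam.log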

    • grrgyle@slrpnk.net · 5 days ago

      Oh that’s very interesting. I usually do sort --unique or sort [...] | uniq if I need specific sorting logic (like by size on disk, etc).

      • tal@lemmy.today · 5 days ago

        Looking at the above awk snippet, though: it’ll retain order. sort will normally change the order; the awk snippet won’t, it just skips occurrences of a given line after the first. Depending upon the use case, that order retention could be pretty desirable.
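
        For example:

            $ printf 'b\na\nb\n' | sort -u
            a
            b
            $ printf 'b\na\nb\n' | awk '!x[$0]++'
            b
            a
            $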

  • DreamButt@lemmy.world · 6 days ago

    In all my years I’ve only used more than that a handful of times. Just don’t need it, really.

    Now jq on the other hand…