Cake day: June 22nd, 2023

  • That was an interesting read… The company I currently work for doesn’t allow AI tools to be fully integrated into our code base. I tinker with them on my own time, but I’m left wondering what the profession is turning into for everyone else.

    Here on Lemmy, we are definitely in the naysayers’ camp, but this article paints a picture in which almost everyone in tech is on board and convinced these tools are the way forward, and writing code by hand is a thing of the past. The author certainly went to great lengths to recount interviews with people who share this opinion, many of whom, I will note, have a vested interest in AI. Yet they didn’t really ask anyone who specifically held the opposing viewpoint, only tangentially mentioning that opponents exist and dismissing them as perhaps deluded.

    I did appreciate that they touched on the difference between greenfield and brownfield projects, and reported that Google only saw about a 10% increase in productivity with this kind of AI workflow.

    Still, I wonder what the future holds, and I suppose it’s too early to know how this will all turn out. I will admit that I’m more in the naysayers’ camp, but perhaps that’s from a fear of losing my livelihood? Am I predisposed to see how these tools are lacking? Have I not given them a fair chance?

  • My experience with LLMs for coding has been similar. You have to be extremely vigilant, because they can produce very good code but will also miss important things that can cause disasters. It makes you very paranoid about their output, which is probably how you should approach it, and honestly how you should approach any code you write yourself or get from somewhere else.

    I can’t bring myself to actually use them for generating code like he does in this blog post, though. That seems infuriating. I find them useful as a way to query knowledge about topics I’m interested in, which I then cross-reference with documentation and other sources to make sure I understand it.

    Sometimes you’re dealing with a particular issue or problem that is very hard to Google for or look up. LLMs are a good starting point for getting an understanding of it, even if that understanding could be flawed; I’ve found they usually point me in the right direction. Though the environmental and ethical implications of using these tools also bother me. Is making my discovery phase for a topic a little bit easier worth the cost of these things?


  • I feel like I’ve found a fairly sweet spot with LLMs and coding. I use them almost entirely like a rubber ducky. Any small bits of code that they do generate, I diligently try to understand, looking for inefficiencies or issues and coming up with a better solution.

    This is a pretty slow process, so they don’t speed me up at all. They just make the initial discovery maybe a little bit easier? As Google has gotten worse, and especially when I’m working in a code base with limited or bad documentation, I’ve found them useful for getting my bearings. I always go in with the paranoia that anything they generate could be a disaster, so I have to do extra due diligence.

    I’ve found that they’re pretty good at generating a couple of lines of code, though I usually have to refine it a lot. Anything more than that seems to be more trouble than it’s worth.

    However, I have found them to be a great tool for learning, even if I’m not sure whether what they’re putting out is correct. They usually lead me to start thinking about things and to look deeper into documentation, research papers, and so on.


  • I remember when the term was first coined, and it meant something like “asking an LLM to code and NOT attempting to validate, fix, or correct the outputs yourself; just keep prompting in natural language until it works.” It was supposed to be a joke: this sort of use hits a wall pretty quickly and illustrates how limited LLMs can be.

    The term has taken off, and its meaning is now in flux. I did find it particularly amusing to see all the LinkedIn lunatics start posting LLM-written garbage about “integrating vibe coding into your workflow” because they thought it was the new buzzword… and I guess they were right.