• 0 Posts
  • 29 Comments
Joined 2 years ago
Cake day: June 21st, 2023



  • Next.js is a highly opinionated framework. “Our way or the highway” is what should be expected going in. Good luck if your requirements change later on, and I hope your code is transferable to a new framework if needed.

    Unfortunately, I’ve never been able to follow “our way” because my projects are more complex than whatever basic blog setup they document. I always end up just building my own stack around Vite. I’m also not much of a fan of fighting against my tools when what I need isn’t something the tool devs already thought of.




  • For programming languages? I don’t need many features as long as what exists is enough to do everything I need. In fact, the fewer, the better (or you end up with C++'s regex/Python’s urllibN/etc).

    I guess that means that I’d end up more on the documentation side, though my reason isn’t because I want the most documented language of all time, but because I want the fewest built-in features.

    This is why I mostly write Rust when given the option. I write a lot of Python, but I hate the standard library so much. There’s the urllib stuff, plus there’s a bunch of deprecated stuff in the base64 module, plus I can’t stand Python’s implementation of async (coroutines are cool but asyncio is miserable to use imo).

    Edit: Oh, and nobody’s going to give just an integer when nuanced answers are more interesting to discuss.


  • It would be convenient to be able to clone the task management repository alongside the code repository, too.

    FYI it’s entirely possible to track multiple branches with unrelated histories in git. Both the task management and code can live in the same repository.

    You could theoretically even give each task an ID, then tag commits in the code that complete those tasks with those task IDs.

    This gives me some ideas for how it could be done that might be fun to explore. It comes with the benefit of not being platform-specific (GitHub/GitLab/etc). Merge conflicts might be annoying, but maybe there’s a way to take advantage of Git’s tree to represent relationships between tasks and have one “HEAD” commit (or branch or other ref) per task? Not sure how that would work, honestly.
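
    For anyone curious, a rough sketch of what that could look like with plain git (the branch, file, and tag names here are made up just for illustration):

    # an orphan branch gives the task data its own unrelated history in the same repo
    git switch --orphan tasks
    echo "Fix login bug" > task-123.md
    git add task-123.md
    git commit -m "Add task 123"

    # later, back on the code branch, tag the commit that finished the task
    git switch main
    git tag task-123-done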


  • TehPers@beehaw.org to Programming@programming.dev · Long Names Are Long · edited 17 days ago

    Though in some contexts, I might prefer a name like employeeToRole for a Map<Employee, Role> over the article’s employeeRoles.

    Following the article, the former describes the type (a map from employees to roles) while the latter describes the relationship (these are the employees’ roles).

    At the same time, I’m not pedantic enough to care in practice. Both are fine, and I’ll get the point when I read either of those.

    I agree with the author here that names don’t need to be so verbose, but I also think there needs to be a balance. out or req or res are clear with context, but output, request, and response are always clear and not bad to write. http_response adds extra unnecessary info (unless there’s another response variable in scope).

    It also helps when languages support local variable shadowing. For example:

    let response = foo();
    let response = validate_response(response);
    

    Both of these are responses, and fundamentally the same thing being passed around while being mutated. The types are (potentially) different. You won’t use the first variable ever again (and in Rust, likely can’t). Just reuse the name.


  • A pretty good way to get a code review is to post the code on GitHub and make a post advertising it as a tool everyone needs. People will be quick to review it.

    As far as LLMs go, they tend to be too quick to please you. It might be better to ask one to generate code that does something similar to what you’re doing, then compare the two to see if you learn anything from it or if it does something better than your code does.



  • TehPers@beehaw.org to Programming@programming.dev · AI Can't Help You Write Well · 22 days ago

    The only person who can answer whether a tool will be useful to you is you. I understand that you tried and couldn’t use it. Was it useful to you then? Seems like no.

    Broad generalizations of “X is good at Y” can rarely be measured accurately with a useful set of metrics, are rarely studied with sufficiently large sample sizes, and often discount the edge cases where someone might find it useful or not useful despite the opposite being generally true in the study.

    And no, I haven’t tried it. It wouldn’t be good at what I need it to do: think for me.






  • Guess I’ll post another update. The block-based data structure makes no sense to me. At one point it claims that removing a pair from the data structure is O(1):

    To delete the key/value pair ⟨a,b⟩, we remove it directly from the linked list, which can be done in O(1) time.

    This has me very confused. First, it doesn’t explain how to find which linked list to remove it from (every block is a linked list, and there are many blocks). You can binary search for the blocks that can contain the value and search them in order based on their upper bounds, but that’d be O(M * |D_0|) just to search the non-bulk-prepended values.

    Second, it feels like the data structure is described primarily from a theoretical perspective. Linked lists are fine in theory, but from a practical standpoint it’d be better to initialize each block as a preallocated array (vector) of size M. It’s also not clear whether each block’s elements should be sorted by key within the block itself, but doing so would make the most sense in my opinion: it cuts the split operation from O(M) to O(1), and it’d answer how PULL() returns “the smallest M values”.
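
    To make that concrete, here’s roughly the shape I’m imagining (the names and the cost of remove are my own guesses, not what the paper describes):

    // Each block is a preallocated vector of key/value pairs sorted by key
    // (at most M of them), and blocks are kept in order of their upper bounds
    // so the right block can be located with a binary search.
    struct Block<K, V> {
        upper_bound: K,
        pairs: Vec<(K, V)>, // sorted by key, pairs.len() <= M
    }

    struct BlockList<K, V> {
        block_size: usize,        // M
        blocks: Vec<Block<K, V>>, // ordered by upper_bound
    }

    impl<K: Ord, V> BlockList<K, V> {
        /// Index of the first block whose upper bound is >= key.
        fn block_index(&self, key: &K) -> usize {
            self.blocks.partition_point(|block| block.upper_bound < *key)
        }

        /// Remove a pair by key. Note this costs O(log #blocks + log M) to locate
        /// the pair plus O(M) to shift the block, not the O(1) the paper claims.
        fn remove(&mut self, key: &K) -> Option<V> {
            let idx = self.block_index(key);
            let block = self.blocks.get_mut(idx)?;
            let pos = block.pairs.binary_search_by(|(k, _)| k.cmp(key)).ok()?;
            Some(block.pairs.remove(pos).1)
        }
    }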

    Anyway, it’s also possible that the language of the paper is just beyond me.

    I like the divide-and-conquer approach, but the paper itself is difficult to implement in my opinion.




  • In case anyone’s curious, still working on it. It’s not as simple as something like Dijkstra’s algorithm.

    What’s really interesting is the requirement it seems to place on the graph itself. From what I can tell, it wants a graph where each node has an in-degree of at most 2 and an out-degree of at most 2, with a total degree of no more than 3. A traditional digraph can be converted to this format by splitting each node into a strongly connected cycle of nodes, where each node in the cycle carries the in-edge and out-edge needed to maintain that cycle (with weights of 0) plus one of the original edges.

    Theoretically, this invariant can be maintained by the graph data structure itself by adding nodes as needed when adding edges. That’s what my implementation does, to avoid the cost of converting the graph each time you run the algorithm. In this case, one of these node cycles represents your higher-level concept of a node (which I’m calling a node group).
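
    A very rough sketch of how that can look (the names and representation are mine, just to illustrate the cycle idea):

    // Each original node becomes a "node group": a cycle of expanded nodes joined
    // by 0-weight edges, where each expanded node carries at most one original
    // edge. Adding an edge grows both endpoint groups, so no expanded node ever
    // exceeds in-degree 2 / out-degree 2 / total degree 3.
    #[derive(Default)]
    struct Graph {
        edges: Vec<Vec<(usize, u64)>>, // adjacency list over expanded nodes
        groups: Vec<Vec<usize>>,       // expanded node ids per group, in cycle order
    }

    impl Graph {
        /// Add an original node; it starts as a group with a single expanded node.
        fn add_group(&mut self) -> usize {
            self.edges.push(Vec::new());
            self.groups.push(vec![self.edges.len() - 1]);
            self.groups.len() - 1
        }

        /// Grow a group's cycle by one expanded node and return its id.
        fn grow_group(&mut self, group: usize) -> usize {
            self.edges.push(Vec::new());
            let new = self.edges.len() - 1;
            let first = self.groups[group][0];
            let last = *self.groups[group].last().unwrap();
            // splice the new node into the 0-weight cycle: ... -> last -> new -> first -> ...
            self.edges[last].retain(|&(v, w)| !(v == first && w == 0));
            self.edges[last].push((new, 0));
            self.edges[new].push((first, 0));
            self.groups[group].push(new);
            new
        }

        /// Add an original edge between two groups, giving each endpoint a fresh
        /// expanded node so the degree bound keeps holding.
        fn add_edge(&mut self, from: usize, to: usize, weight: u64) {
            let u = self.grow_group(from);
            let v = self.grow_group(to);
            self.edges[u].push((v, weight));
        }
    }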

    The block-based list is also interesting, and I’ve been having trouble converting it to a data structure in code. I’m still working through the paper though, so hopefully that isn’t too bad to get done.