• Denjin@feddit.uk
    6 months ago

    Our analysis draws on computational theory and Gödel’s First Incompleteness Theorem

    You don’t need any fancy analysis to understand the basic principles of LLMs: they are sorting and prediction algorithms that work on large datasets, and that alone explains why they will always produce incorrect results for some queries.

    They are not intelligent. They don’t understand or interpret any of the information in their datasets; they just guess what you might want to see based on what appears similar.
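
    The prediction-by-similarity idea can be sketched with a toy bigram model. This is a deliberate simplification (real LLMs use neural networks, not raw counts), but it shows the same basic principle: pick the continuation that most often followed the current word in the training data, with no understanding involved. All names here (`train`, `predict`, the corpus) are made up for illustration.

    ```python
    from collections import Counter, defaultdict

    # Toy bigram "language model": predict the next word purely by
    # counting which word most often followed the current one in the
    # training text -- pattern frequency, not comprehension.
    def train(text):
        counts = defaultdict(Counter)
        words = text.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
        return counts

    def predict(counts, word):
        # Return the most frequent follower, or None for unseen words.
        followers = counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    corpus = "the cat sat on the mat the cat ate the fish"
    model = train(corpus)
    print(predict(model, "the"))  # "cat" -- most common follower, true or not
    print(predict(model, "sat"))  # "on"
    print(predict(model, "dog"))  # None -- no similar pattern to copy
    ```

    Note the model happily outputs “cat” after “the” regardless of whether that makes sense in context, which is the guessing behaviour described above.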