• farsinuce@feddit.dk

“A completely different architecture is now needed to be able to do it differently.”

Agreed. Reasoning language models are a step forward, but they are no revolution. They will most likely be useful for analysis work and other forms of pattern recognition. Not particularly clickbait-friendly …

Added:

The Transformer/attention paper is, after all, all the way back from 2017, and it was not until late 2022 that ChatGPT (based on GPT-3.5) was released and became popular:

    Key Milestones in LLM Evolution

    2017: Foundation Era

    • Transformer architecture introduced in “Attention Is All You Need” paper, enabling parallel text processing
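As an aside, the core operation behind that paper is scaled dot-product attention. Here is a minimal NumPy sketch, purely for illustration (toy shapes, single head only; the real architecture adds multi-head projections, masking, and positional encodings):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from "Attention Is All You Need" (2017).
    Q, K, V: (seq_len, d_k) arrays; all positions are processed
    in parallel, unlike in a recurrent network."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token similarities
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted sum of value vectors

# Toy self-attention example: 4 tokens, 8-dimensional vectors
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8)
```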

    2018-2019: Early Models

    • BERT and GPT-2 demonstrate the potential of pre-training on large text corpora
    • OpenAI initially delayed GPT-2 release due to misuse concerns

    2020-2021: Scaling Revolution

    • GPT-3 (175B parameters) reveals emergent abilities through massive scaling
    • Models begin showing capabilities not explicitly trained for

    2022: Alignment Breakthrough

• ChatGPT popularizes RLHF (Reinforcement Learning from Human Feedback), building on OpenAI’s earlier InstructGPT work
• Consumer adoption explodes thanks to the conversational interface

    2023: Multimodal Expansion

• Models gain vision capabilities (GPT-4V; Claude followed with vision in Claude 3)
    • Open-source alternatives gain traction (Llama, Mistral)

    2024: Reasoning Emergence

    • OpenAI’s o1 introduces dedicated “thinking” capabilities
    • Models demonstrate improved complex problem-solving by generating internal reasoning chains