Text Generation Quality Improvements #36

@wiseaidev

Description

At the moment, the core LMM functionality added in #5 does not predict text based on the topic of a sentence or paragraph, because it has no knowledge of that topic yet. Instead, it mathematically selects the most plausible keywords, seeded from the Linux dictionary of words. This is expected: we have eliminated the concept of training altogether, and our focus is on building ethical AI that extends human work rather than replacing humans entirely.
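For illustration only, here is a minimal sketch of what topic-agnostic keyword selection seeded from a dictionary-style word list could look like. The function name, the inline word list, and the character-overlap scoring heuristic below are all hypothetical stand-ins, not the actual LMM implementation:

```rust
/// Hypothetical sketch: pick the "most plausible" next keyword from a
/// dictionary-style word list. In a real setup the list could be seeded
/// from the Linux dictionary (e.g. /usr/share/dict/words).
fn pick_keyword<'a>(seed: &str, dictionary: &[&'a str]) -> Option<&'a str> {
    dictionary
        .iter()
        // Score each candidate by how many of its characters also appear in
        // the seed; a naive stand-in for whatever plausibility measure the
        // LMM actually uses.
        .max_by_key(|word| word.chars().filter(|c| seed.contains(*c)).count())
        .copied()
}

fn main() {
    // A tiny inline stand-in for the Linux dictionary of words.
    let dictionary = ["borrow", "vector", "theorem", "integral", "matrix"];
    let next = pick_keyword("integrate", &dictionary);
    println!("{:?}", next); // prints Some("integral")
}
```

Because the selection is purely lexical, no training pass is required, which is consistent with the no-training design described above.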

For example, as a Rust developer, I could give the LMM access to a knowledge directory of Rust books, like the agent examples added in #26, along with internet access, and then ask it to predict, create, or help with Rust-related topics. In that case, the LMM would generate new and novel ideas in that specific domain while still performing at the speed of light.

At this point, the core LMM prediction logic is mature enough to support agents built on top of it, which I am currently doing. I will keep this issue open as a reminder to continue future research and to hire engineers and scientists to improve the coherence of text generation.

As mentioned in our whitepaper, the coherence of the generated text currently resembles the early days of GPTs, e.g., GPT-1 and GPT-2, while achieving performance that surpasses any LLM in existence, including GPT-5, as you can currently experience in our live demo.

My main focus right now is building agents around it to solve all ARC-AGI-3 tasks and to raise funds to fix this issue.

Metadata

    Labels

    enhancement (New feature or request)
    good first issue (Good for newcomers)
    help wanted (Extra attention is needed)
    rust (Pull requests that update rust code)
