    Alignment Lab AI Releases 'Buzz Dataset': The Largest Supervised Fine-Tuning Open-Sourced Dataset

    Language models, a class of artificial intelligence systems, focus on interpreting and generating human-like text. These models are integral to various …

    How 'Chain of Thought' Makes Transformers Smarter

    Large Language Models (LLMs) like GPT-3 and ChatGPT exhibit exceptional capabilities in complex reasoning tasks such as mathematical problem-solving and …

    FastGen: Cutting GPU Memory Costs Without Compromising on LLM Quality

    Autoregressive language models (ALMs) have proven capable in machine translation, text generation, and other tasks. However, these models pose challenges, including …

    Don't overlook the impact of AI on data management

    Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing …

    Are you behind when it comes to generative AI?

    Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing …

    QoQ and QServe: A New Frontier in Model Quantization Transforming Large Language Model Deployment

    Quantization, a method integral to computational linguistics, is essential for managing the vast computational demands of deploying large language models …

    Researchers from Princeton and Meta AI Introduce 'Lory': A Fully-Differentiable MoE Model Designed for Autoregressive Language Model Pre-Training

    Mixture-of-experts (MoE) architectures use sparse activation to scale model sizes while preserving high training and inference efficiency. …

    THRONE: Advancing the Evaluation of Hallucinations in Vision-Language Models

    Understanding and mitigating hallucinations in vision-language models (VLMs) is an emerging field of research that addresses the generation of coherent …

    Safe Marine Navigation Using Vision AI: Enhancing Maritime Safety and Efficiency

    Maritime transportation has always been pivotal for global trade and travel, but navigating the vast and often unpredictable waters presents …

    KnowHalu: A Novel AI Approach for Detecting Hallucinations in Text Generated by Large Language Models (LLMs)

    The power of LLMs to generate coherent and contextually appropriate text is impressive and valuable. However, these models sometimes produce …
