A recent study has shed light on significant privacy and data protection issues within ChatGPT’s GPT ecosystem. Among its key findings are several alarming practices, starting with widespread data collection: GPTs and their associated Actions (external services) collect extensive user data, often without proper disclosure or consent. ... Read more 26 Aug 2024 - 1 minute read
Aleph Alpha recently announced the launch of Pharia-1-LLM, a new language model series with two 7B foundation models. After reviewing the available materials, including the Model Card, I’ve been able to answer some questions about this release in online discussions. Here’s a breakdown of key points and considerations: Model Characteristics and ... Read more 26 Aug 2024 - 2 minute read
On the persistent issue of factuality hallucinations in Large Language Models (LLMs), a LinkedIn post by Maxime Labonne cited the “Indigo Sock Game” as an example - a non-existent game that, according to him, most models will nonetheless confidently describe when prompted. This phenomenon underscores the ongoing challenges in ensuring LLM reliabi... Read more 23 Aug 2024 - 1 minute read
Recent discussions around the Aidan Bench (https://github.com/aidanmclaughlin/Aidan-Bench) have highlighted the significant impact of temperature settings and sampling methods on benchmark results for large language models (LLMs). Sam Paech’s experiments (https://x.com/sam_paech/status/1823295200724398244) with the GPT-4o-mini model demonstrate... Read more 13 Aug 2024 - less than 1 minute read
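The temperature effect behind these benchmark discrepancies can be sketched in a few lines: temperature divides the logits before the softmax, so low values concentrate probability on the top token while high values flatten the distribution and increase run-to-run variance. The function name and toy logits below are purely illustrative and are not taken from the Aidan Bench or Sam Paech's code.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling.

    Lower temperature sharpens the distribution (near-deterministic output);
    higher temperature flattens it (more diverse, noisier benchmark runs).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # toy vocabulary of three tokens

# At temperature 0.1 the top token dominates almost every draw.
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(200)]

# At temperature 5.0 all three tokens are sampled with similar frequency.
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(200)]
```

This is why benchmark scores reported at different temperatures (or with different samplers such as top-p or min-p) are not directly comparable: the same model produces a meaningfully different output distribution under each setting.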
A commenter shared previously that deep-fake methods struggle with forging faces from the side (cheeks, ears). Not anymore: In this paper, we present a novel 3D head avatar creation approach capable of generalizing from few-shot in-the-wild data with high-fidelity and animatable robustness. […] we propose a framework comprising prior learning... Read more 13 Aug 2024 - less than 1 minute read