Memberships

Data Alchemy • 37.5k members • Free
Advanced Data Science Society • 112 members • $10/m
1225 contributions to Data Alchemy
NVIDIA Open Sources Audio2Face Animation Model
By leveraging large language and speech models, generative AI is creating intelligent 3D avatars that can engage users in natural conversation, from video games to customer service. To make these characters truly lifelike, they need human-like expressions. NVIDIA Audio2Face accelerates the creation of realistic digital characters by providing real-time facial animation and lip-sync driven by generative AI. Today, NVIDIA is open sourcing our Audio2Face technology to accelerate adoption of AI-powered avatars in games and 3D applications. https://developer.nvidia.com/blog/nvidia-open-sources-audio2face-animation-model/
Exciting News: Starting My MBA in Data Science!
Hi everyone! I’m really excited to share some good news. I’ve been selected to join an MBA program in Data Science starting this October! I can’t wait to begin this journey and wanted to celebrate this win with you all.
1 like • 1d
@Ana Crosatto Thomsen congratulations!!! That's great news! 🎉🙌.
Is it just me or...
I've noticed a certain decline in content posting over the last few weeks, and I'm not sure whether it's just me losing interest in what's posted or whether it's a real thing here. I've been in similar places before, so I know there are ups and downs, and I feel we're in one. What do you think?
1 like • 2d
@Oriol Fort it may just be because of summer. People are probably busy enjoying the weather outside and spending less time online. That's what I've been doing... 😉😎.
RunPod and Hugging Face access token
Where to add Hugging Face access token to a serverless RunPod endpoint?
0 likes • 2d
@Am. Mayed https://huggingface.co/blog/airabbitX/deploy-hf-private-model
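To make the linked approach concrete: RunPod serverless endpoints let you define environment variables in the endpoint's settings, so the usual pattern is to set the token (e.g. as HF_TOKEN) there and read it inside the handler. A minimal sketch, assuming the variable name HF_TOKEN and the helper get_hf_token are conventions chosen here, not anything RunPod-specific:

```python
import os

def get_hf_token() -> str:
    """Read the Hugging Face token injected via the endpoint's env vars."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; add it under the endpoint's "
            "Environment Variables in the RunPod console."
        )
    return token

# Inside the serverless handler, pass the token to Hub calls, e.g.:
#   from huggingface_hub import snapshot_download
#   snapshot_download("some-org/private-model", token=get_hf_token())
```

This keeps the token out of the container image and your code; rotating it only requires updating the endpoint's environment variable.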
Teaching LLMs to Plan: Logical Chain-of-Thought Instruction Tuning for Symbolic Planning
"Large language models (LLMs) have demonstrated impressive capabilities across diverse tasks, yet their ability to perform structured symbolic planning remains limited, particularly in domains requiring formal representations like the Planning Domain Definition Language (PDDL). In this paper, we present a novel instruction tuning framework, PDDL-Instruct, designed to enhance LLMs' symbolic planning capabilities through logical chain-of-thought reasoning. Our approach focuses on teaching models to rigorously reason about action applicability, state transitions, and plan validity using explicit logical inference steps. By developing instruction prompts that guide models through the precise logical reasoning required to determine when actions can be applied in a given state, we enable LLMs to self-correct their planning processes through structured reflection. The framework systematically builds verification skills by decomposing the planning process into explicit reasoning chains about precondition satisfaction, effect application, and invariant preservation. Experimental results on multiple planning domains show that our chain-of-thought reasoning based instruction-tuned models are significantly better at planning, achieving planning accuracy of up to 94% on standard benchmarks, representing a 66% absolute improvement over baseline models. This work bridges the gap between the general reasoning capabilities of LLMs and the logical precision required for automated planning, offering a promising direction for developing better AI planning systems." https://arxiv.org/abs/2509.13351
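The logical checks the paper teaches models to perform (precondition satisfaction, effect application) are the standard STRIPS-style semantics behind PDDL. A minimal sketch, not code from the paper, using a toy blocks-world "unstack" action:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset    # facts that must hold for the action to apply
    add_effects: frozenset # facts the action makes true
    del_effects: frozenset # facts the action makes false

def applicable(state: frozenset, action: Action) -> bool:
    """Action applicability: every precondition holds in the state."""
    return action.preconds <= state

def apply_action(state: frozenset, action: Action) -> frozenset:
    """State transition: remove delete effects, then add add effects."""
    if not applicable(state, action):
        raise ValueError(f"{action.name} is not applicable in this state")
    return (state - action.del_effects) | action.add_effects

# Toy example: unstack block A from block B.
unstack_a_b = Action(
    name="unstack(A, B)",
    preconds=frozenset({"on(A,B)", "clear(A)", "handempty"}),
    add_effects=frozenset({"holding(A)", "clear(B)"}),
    del_effects=frozenset({"on(A,B)", "clear(A)", "handempty"}),
)
state = frozenset({"on(A,B)", "clear(A)", "handempty", "ontable(B)"})
next_state = apply_action(state, unstack_a_b)
```

Checking a full plan is then just folding apply_action over the action sequence and verifying the goal facts hold in the final state; it is exactly this chain of checks that PDDL-Instruct turns into explicit reasoning steps for the model.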
Marcio Pacheco
Level 7 • 1,117 points to level up
@marcio-pacheco-6005
Tech & Advertising Entrepreneur based in Seattle.

Active 4h ago
Joined Jan 24, 2024
Seattle, WA USA