Vikram Sharma @v4vix ·
Replying to @v4vix
To truly evaluate AI agents, we need to consider their ability to generalize across tasks and domains. This involves assessing their capacity to learn from diverse experiences and adapt to novel situations. #Generalization
Jens Eisert @jenseisert ·
A PAC-Bayesian approach to generalization for quantum models. We take steps towards non-uniform and data-dependent generalization bounds for quantum machine learning models. scirate.com/arxiv/2603.229… In detail, #generalization is a central concept in machine learning theory, yet it is predominantly analyzed through uniform bounds that depend on a model's overall capacity rather than the specific function learned. These capacity-based uniform bounds are often too loose and entirely insensitive to the actual training and learning process. Previous theoretical guarantees have failed to provide #nonuniform, data-dependent bounds that reflect the specific properties of the learned solution rather than the worst-case behavior of the entire hypothesis class. To address this limitation, we derive the first #PACBayesian generalization bounds for a broad class of quantum models by analyzing layered circuits composed of general quantum channels, which include dissipative operations such as mid-circuit measurements and feedforward. Through a channel perturbation analysis, we establish non-uniform bounds that depend on the norms of the learned parameter matrices; we extend these results to symmetry-constrained equivariant quantum models; and we validate our theoretical framework with numerical experiments. This work provides actionable model design insights and establishes a foundational tool for a more nuanced understanding of generalization in #quantummachinelearning. Warm thanks to the team of @pablones8, Matthias C. Caro, @EliesMiquel, @FJSchreiber, and @charl_bp for this great collaboration.
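For context, PAC-Bayesian analyses of this kind build on a standard classical template: a bound that depends on the divergence between a data-dependent posterior Q and a fixed prior P, rather than on worst-case hypothesis-class capacity. The sketch below is the well-known classical (Maurer/McAllester-style) form, not the paper's quantum-specific bound, which replaces the KL term with quantities derived from a channel perturbation analysis:

```latex
% Classical PAC-Bayes template (sketch). P is a prior fixed before
% seeing data, Q any posterior, n the sample size, L_D / L_S the
% population and empirical losses in [0,1]. With probability at
% least 1 - \delta over the draw of the sample S:
\mathbb{E}_{h \sim Q}\bigl[L_{\mathcal{D}}(h)\bigr]
  \;\le\;
\mathbb{E}_{h \sim Q}\bigl[L_{S}(h)\bigr]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\tfrac{2\sqrt{n}}{\delta}}{2n}}
```

The bound is non-uniform because the KL term is evaluated at the specific learned posterior Q, so a solution close to the prior enjoys a tighter guarantee than a uniform capacity bound over the whole hypothesis class.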
AA @AdvAjithAnand ·
Replying to @timesofindia
@timesofindia Hahahha They'll anyways not work for long. They network and then resign. With insights of govt, they begin their own tech startups with added leverages. To satisfy their exam egos, they clear but don't continue for long. #Generalization
AI Hot Sheets @aiHotSheets ·
🔥 LLMs excel at code, but transferring procedural knowledge to natural language tasks is a tough generalization challenge. 🌊 This paper rigorously assesses LLMs' cross-representation generalization of procedures. #AI #LLM #Generalization arxiv.org/abs/2602.03542
Can Large Language Models Generalize Procedures Across Representations?

Large language models (LLMs) are trained and tested extensively on symbolic representations such as code and graphs, yet real-world user tasks are often specified in natural language. To what...
