Attention-based architectures are a powerful force in modern AI. In particular, the emergence of in-context learning abilities enables task generalization far beyond the original next-token prediction ...
We briefly discuss the quark-antiquark Bethe-Salpeter equation and the quark Dyson-Schwinger equation derived in preceding papers. We also consider the q\bar{q} quadratic mass operator M^{2} = (w_{1} + ...
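For orientation, the quark-antiquark Bethe-Salpeter equation mentioned above has a standard textbook form, reproduced below as a reference point; this is the generic homogeneous equation, not necessarily the specific version derived in the preceding papers.

```latex
% Generic homogeneous quark-antiquark Bethe-Salpeter equation (textbook form):
%   \Gamma = bound-state amplitude, S = dressed quark propagator,
%   K = interaction kernel, P = total momentum, p, q = relative momenta.
\Gamma(p;P) = \int \frac{d^4 q}{(2\pi)^4}\,
  K(p,q;P)\, S\!\left(q + \tfrac{P}{2}\right) \Gamma(q;P)\, S\!\left(q - \tfrac{P}{2}\right)
```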
Attention-based neural network sequence models such as transformers have the capacity to act as supervised learning algorithms: They can take as input a sequence of labeled examples and output ...
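To make that setup concrete: treating a sequence model as a supervised learner amounts to serializing labeled examples and an unlabeled query into one input sequence. The sketch below is a minimal illustration of that framing, assuming a generic text-completion interface; `complete` is a hypothetical placeholder, not an API from the paper.

```python
def make_icl_prompt(examples, query):
    """Serialize labeled (input, label) pairs plus an unlabeled query into a
    single sequence, so the model's forward pass plays the role of a
    supervised learning algorithm (in-context learning)."""
    lines = [f"Input: {x} -> Label: {y}" for x, y in examples]
    lines.append(f"Input: {query} -> Label:")
    return "\n".join(lines)

# Toy usage: three labeled examples, one query to be predicted in context.
examples = [("2 4 6", "even"), ("1 3 5", "odd"), ("8 10 12", "even")]
prompt = make_icl_prompt(examples, "7 9 11")
print(prompt)
# prediction = complete(prompt)  # hypothetical model call; any sequence model
#                                # trained on next-token prediction fits here
```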
This paper considers the problem of piecewise linear prediction from a competitive-algorithm perspective. In prior work, prediction algorithms have been developed that are "universal" with ...
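One building block behind such schemes can be sketched directly. The snippet below is not the paper's universal algorithm; it is a minimal online predictor, assuming a fixed partition of the previous sample's range with one LMS-updated linear model per region. Universal approaches of the kind the abstract describes typically aggregate over many such partitions rather than committing to one.

```python
import numpy as np

class PiecewiseLinearPredictor:
    """Online piecewise linear prediction over a fixed partition: each region
    keeps its own affine model of x[t] given x[t-1], updated by LMS."""

    def __init__(self, boundaries, lr=0.1):
        self.boundaries = np.asarray(boundaries)   # region edges on x[t-1]
        n_regions = len(self.boundaries) + 1
        self.w = np.zeros(n_regions)               # slope per region
        self.b = np.zeros(n_regions)               # intercept per region
        self.lr = lr

    def _region(self, x_prev):
        return int(np.searchsorted(self.boundaries, x_prev))

    def predict(self, x_prev):
        r = self._region(x_prev)
        return self.w[r] * x_prev + self.b[r]

    def update(self, x_prev, x_true):
        r = self._region(x_prev)
        err = x_true - self.predict(x_prev)
        self.w[r] += self.lr * err * x_prev        # LMS gradient step
        self.b[r] += self.lr * err

# Toy run on a sequence whose dynamics differ by the sign of the past sample.
rng = np.random.default_rng(0)
x = [0.5]
for _ in range(200):
    x.append((0.9 if x[-1] > 0 else -0.5) * x[-1] + 0.05 * rng.standard_normal())
pred = PiecewiseLinearPredictor(boundaries=[0.0])
for t in range(1, len(x)):
    _ = pred.predict(x[t - 1])    # one-step prediction before seeing x[t]
    pred.update(x[t - 1], x[t])
```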
In the context of “Waveform Diversity”, this paper presents a comparison between linear frequency-modulated (LFM) and non-linear frequency-modulated (NLFM) waveforms for dual-channel time ...
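The contrast between the two waveform families can be illustrated numerically. The following sketch uses illustrative parameters and a generic tangent-based NLFM frequency law (an assumption for illustration, not the paper's design) to compare autocorrelation peak sidelobe levels; the classic trade-off is the LFM's roughly -13 dB sidelobes against the NLFM's lower sidelobes at the cost of a wider mainlobe.

```python
import numpy as np

fs = 10e6   # sample rate [Hz] (illustrative, not from the paper)
T = 100e-6  # pulse width [s]
B = 2e6     # swept bandwidth [Hz]
t = np.arange(0, T, 1 / fs) - T / 2

# LFM: linear frequency sweep, i.e. quadratic phase.
lfm = np.exp(1j * np.pi * (B / T) * t**2)

# NLFM: tangent-based law f(t) = (B/2) tan(2*gamma*t/T) / tan(gamma), which
# dwells longer near band center, shaping the spectrum to suppress
# autocorrelation sidelobes. gamma in (0, pi/2) sets the shaping strength.
gamma = 1.2
phase = -np.pi * B * T / (2 * gamma * np.tan(gamma)) * np.log(np.cos(2 * gamma * t / T))
nlfm = np.exp(1j * phase)

def psl_db(s, mainlobe_samples):
    """Peak sidelobe level (dB) of the matched-filter output |autocorrelation|."""
    ac = np.abs(np.correlate(s, s, mode="full"))
    ac /= ac.max()
    peak = ac.argmax()
    mask = np.ones_like(ac, dtype=bool)
    mask[peak - mainlobe_samples : peak + mainlobe_samples + 1] = False
    return 20 * np.log10(ac[mask].max())

main = 4 * int(fs / B)  # samples to exclude around the compressed mainlobe
print(f"LFM  peak sidelobe: {psl_db(lfm, main):.1f} dB")   # ~ -13 dB
print(f"NLFM peak sidelobe: {psl_db(nlfm, main):.1f} dB")  # lower than LFM
```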