I am aware of three sorts of floating point semantics.
The earliest machine descriptions spelled out the individual steps performed by the real hardware, and with this information it was easy to predict the resulting values for arbitrary inputs. The timing was thus specified as well. I saw this first in the IBM 704.
More recently one ascribes a particular rational number to each ‘non-exceptional’ binary memory pattern. The semantics of an add, multiply or divide need only claim that the result is the floating point value whose rational denotation is closest to the result of the same operation on the rationals (modulo some rounding details). Both the IBM 360 and the IEEE floating point semantics use this sort of description.
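This ‘closest rational’ claim can be checked directly in Python, whose floats are IEEE doubles; a minimal sketch (assuming Python 3.9+ for `math.nextafter`, and using `Fraction` to recover each double’s exact rational denotation):

```python
from fractions import Fraction
import math

a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)   # exact sum of the operands' rational denotations
computed = a + b                    # hardware result, rounded to nearest

# No neighboring double's rational denotation is closer to the exact
# rational sum than the computed result's denotation.
err = abs(Fraction(computed) - exact)
for neighbor in (math.nextafter(computed, math.inf),
                 math.nextafter(computed, -math.inf)):
    assert err <= abs(Fraction(neighbor) - exact)
```

The residual `err` is nonzero (the exact sum is not representable), but it is bounded by half a unit in the last place, which is all the semantics promises.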
Cray’s 6600 ascribed an interval of reals as the “meaning” of a floating point number: the bits to the right of those represented in memory were taken to be entirely unknown, and all the infinite fractions so obtained were taken to constitute the values “represented” by the memory form of the number. For any floating point hardware operation the floating operands were extended at the right end by one 1 bit, which then behaved arithmetically as the number in the middle of the represented range. After the operation and normalization, all bits that would not fit in the memory format were discarded. In short, the 6600 rounded the inputs to floating operations, not the output. A somewhat nontrivial theorem was that ordinary arithmetic on integers, converted to floating point in the obvious way, got the ‘right’ integer result. The 6600 semantics was not categorical; some combinations of inputs allowed more than one output. Nonetheless it was possible to reason about the accuracy of results, though the reasoning was somewhat different from that possible with IEEE-style semantics.
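The input-rounding scheme can be sketched with a toy model: a 16-bit coefficient standing in for the 6600’s 48 bits, positive normalized operands only, and exact rational arithmetic playing the role of the wide internal datapath. All names here are illustrative, not the 6600’s:

```python
from fractions import Fraction

P = 16  # toy coefficient width (assumption; the 6600 used 48 bits)

def encode(n):
    """Convert a positive integer to a normalized (mantissa, exponent) pair."""
    m, e = n, 0
    while m < 1 << (P - 1):          # shift left until the top bit is set
        m, e = m << 1, e - 1
    return m, e

def extend(m, e):
    """Append one 1 bit at the right: the operand then behaves as the
    midpoint of the interval of reals it represents."""
    return Fraction(2 * m + 1) * Fraction(2) ** (e - 1)

def renormalize(x):
    """Normalize the exact result, then discard (truncate) every bit
    that will not fit back into a P-bit mantissa."""
    e = 0
    while x >= 1 << P:
        x, e = x / 2, e + 1
    while x < 1 << (P - 1):
        x, e = x * 2, e - 1
    return int(x), e                 # int() truncates the leftover fraction bits

def fadd(a, b):
    """Round the inputs, not the output."""
    return renormalize(extend(*a) + extend(*b))

def fmul(a, b):
    return renormalize(extend(*a) * extend(*b))

def value(m, e):
    return Fraction(m) * Fraction(2) ** e

# Flavor of the integer theorem: the appended round bits sit far below
# the integers' unit place, so truncating the floating result back to an
# integer recovers the exact answer.
s = value(*fadd(encode(3), encode(5)))   # exactly 8 in this case
p = value(*fmul(encode(3), encode(5)))   # 15 plus a tiny sub-unit residue
assert int(s) == 8 and int(p) == 15
```

Note that the product carries a small positive residue from the round bits while the sum comes out exact; both truncate to the right integer, which is the hedge built into the ‘right result’ theorem.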