If you're interested in lazy + pessimistic transactional memory, you can read the paper "Pi-TM: Pessimistic Invalidation for Scalable Lazy Hardware Transactional Memory".
Haskell's GHC runtime implements a lazy, optimistic STM system that is checked at the type level, in the sense that the STM monad keeps irrevocable side effects out of transactions (https://wiki.haskell.org/Software_transactional_memory). Transactions run against a private log and are validated when they commit, which lets threads communicate via transactions that (from the outside) look "atomic." Its performance characteristics aren't the best, but it does make it much easier to write that kind of concurrent code correctly.
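This isn't GHC's actual implementation, but a toy Python sketch of the same lazy + optimistic idea (all names here — `TVar`, `Transaction`, `atomically` — are hypothetical stand-ins, and the single global commit lock is a gross simplification of GHC's per-TVar locking):

```python
import threading

class TVar:
    """A mutable cell that may only be touched through a transaction."""
    def __init__(self, value):
        self.value = value

_commit_lock = threading.Lock()  # toy stand-in for fine-grained commit locking

class Transaction:
    def __init__(self):
        self.reads = {}   # TVar -> value observed during the transaction
        self.writes = {}  # TVar -> value to publish at commit (lazy versioning)

    def read(self, tvar):
        if tvar in self.writes:          # see our own buffered write
            return self.writes[tvar]
        if tvar not in self.reads:
            self.reads[tvar] = tvar.value
        return self.reads[tvar]

    def write(self, tvar, value):
        self.writes[tvar] = value        # buffered; not yet visible to anyone

    def commit(self):
        with _commit_lock:
            # Optimistic validation: did anything we read change underneath us?
            if any(tvar.value != seen for tvar, seen in self.reads.items()):
                return False             # abort: just discard the logs
            for tvar, value in self.writes.items():
                tvar.value = value       # publish all writes atomically
            return True

def atomically(action):
    while True:                          # retry on conflict, like GHC's STM
        tx = Transaction()
        result = action(tx)
        if tx.commit():
            return result

# Usage: an atomic transfer between two accounts.
a, b = TVar(100), TVar(0)

def transfer(tx):
    tx.write(a, tx.read(a) - 30)
    tx.write(b, tx.read(b) + 30)

atomically(transfer)
print(a.value, b.value)  # 70 30
```

Note how "lazy" makes aborts cheap here: nothing touched shared state before commit, so aborting is just dropping the two logs and retrying.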
Similarly, Rust gets a TM-like guarantee without any runtime TM system, by verifying at compile time that only one thread has a usable write reference to a given resource at any one time. The only places where a programmer has to specifically think about this are unsafe blocks, or when implementing some of the underlying concurrency primitives. Outside of those cases, data races are guaranteed not to happen. (And unlike Haskell, Rust is pretty fast.)
How does eager + optimistic work compared to lazy + optimistic?
With eager versioning you keep writing to memory directly as you go; with lazy versioning you buffer your writes and only touch memory at the end. Commits are easier in the first (just drop the undo log). Aborts are easier in the second (just drop the write buffer).
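A minimal, single-threaded Python sketch of that cost asymmetry (hypothetical names, and no conflict detection at all — this only models the version management):

```python
memory = {"x": 1, "y": 2}  # stand-in for shared memory

# --- Eager versioning: write in place, remember old values in an undo log ---
def eager_tx(writes):
    undo_log = []
    for addr, new in writes:
        undo_log.append((addr, memory[addr]))  # save the old value first
        memory[addr] = new                     # write directly to memory
    return undo_log

def eager_commit(undo_log):
    undo_log.clear()                # commit is trivial: just drop the log

def eager_abort(undo_log):
    for addr, old in reversed(undo_log):
        memory[addr] = old          # abort is the expensive path: walk and restore

# --- Lazy versioning: buffer writes, touch memory only at commit ---
def lazy_tx(writes):
    return dict(writes)             # write buffer; memory is untouched so far

def lazy_commit(buffer):
    memory.update(buffer)           # commit is the expensive path: publish everything

def lazy_abort(buffer):
    buffer.clear()                  # abort is trivial: just drop the buffer
```

With the eager scheme, `memory` reflects the speculative write immediately and aborting has to undo it; with the lazy scheme, `memory` is clean until `lazy_commit` runs, so aborting costs nothing.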
Why is eager + optimistic not practical? Is it because the aborts are both costly and frequent?
Eager versioning writes directly to memory and keeps an undo log; when you abort, you walk the undo log and restore each entry (in reverse order, so that overlapping writes unwind correctly).
If there's a write-write conflict between two threads, aborting is more complicated: the thread that aborts has to make sure it doesn't restore undo-log entries for locations that the committing thread has since written, or it would wipe out committed data.
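To make that concrete, here's a deliberately broken Python sketch, assuming conflict detection failed to stop the second writer (names are hypothetical; a real eager system would have to detect this overlap instead of blindly replaying the log):

```python
memory = {"x": 0}

# Transaction A (eager versioning): logs x's old value, then writes in place.
undo_log_A = [("x", memory["x"])]   # records x = 0
memory["x"] = 1                     # A's speculative write

# Transaction B wins the write-write conflict and commits its own write to x.
memory["x"] = 2                     # B's committed write

# A aborts. Naively replaying its undo log...
for addr, old in reversed(undo_log_A):
    memory[addr] = old              # restores x = 0, clobbering B's commit!

print(memory["x"])  # 0, not 2: B's committed update has been silently lost
```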
I think this is why, in software TM systems, instead of Eager+Optimistic you see Eager+Optimistic(Rd)/Pessimistic(Wr), i.e. write locks are taken eagerly while reads stay optimistic. I'm also guessing that this logic isn't practical to implement in hardware, but I don't know enough about hardware to be sure.