Parser predicts pairs()/next() and emits specialized bytecode.
Bytecode is despecialized at runtime if the prediction was wrong.
Store slot index in hidden control var to avoid key lookups.
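A minimal Lua sketch (assuming stock pairs/next) of the loop shape this targets:

  local t = { a = 1, b = 2, c = 3 }
  local sum = 0
  for k, v in pairs(t) do   -- parser predicts the iterator and emits specialized bytecode
    sum = sum + v
  end
  -- If pairs or next has been replaced with a different function, the prediction
  -- is wrong and the specialized bytecode is despecialized at runtime.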
Important: this changes the semantics of the write barrier!
Carefully read the big comment block in lj_obj.h.
This helps HREFK key slot specialization and allows safely hoisting
HREF/HREFK across GC steps, too (fix for a barely reproducible bug).
Dead keys are only removed during a table resize (as before).
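A hedged illustration of what key slot specialization buys (the names below are invented for the example):

  local obj = { count = 0, step = 1 }
  for i = 1, 100 do
    -- 'count' and 'step' are constant keys; HREFK guards a concrete hash slot,
    -- and that guarded lookup can be hoisted out of the loop even across GC
    -- steps, since dead keys stay in place until the table is resized.
    obj.count = obj.count + obj.step
  end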
Need to sync GC objects to stack only during atomic GC phase.
Need to set up a proper frame structure only for calling finalizers.
Force an exit to the interpreter and let it handle the uncommon cases.
Finally solves the "NYI: gcstep sync with frames" issue.
Drop call gates. Use function headers, dispatched like bytecodes.
Emit BC_FUNCF/BC_FUNCV bytecode at PC 0 for all Lua functions.
C functions and ASM fast functions get extra bytecodes.
Modify internal calling convention: new base in BASE (formerly in RA).
Can now use better C function wrapper semantics (dynamic on/off).
Prerequisite for call hooks with zero-overhead if disabled.
Prerequisite for compiling recursive calls (example below).
Prerequisite for efficient 32/64 bit prototype guards.
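A small Lua sketch of the kind of recursive call referred to above (which recursion patterns actually get compiled is not spelled out here):

  local function fact(n)
    if n <= 1 then return 1 end
    return n * fact(n - 1)   -- self-recursive call; all calls now enter through the function-header bytecode
  end
  print(fact(10))  --> 3628800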
Compile math.sinh(), math.cosh(), math.tanh() and math.random().
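A short usage sketch of the newly compiled math functions inside a numeric loop (values are arbitrary):

  local acc = 0
  for i = 1, 1000 do
    local x = i * 0.001
    acc = acc + math.sinh(x) + math.cosh(x) + math.tanh(x) + math.random()
  end
  print(acc)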
Compile various io.*() functions.
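For example (the exact set of compiled io functions is not enumerated here), a write loop like this is the sort of code affected:

  for i = 1, 100 do
    io.write(i, "\n")   -- io.*() call inside a hot loop
  end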
Drive the GC forward on string allocations in the parser.
Improve KNUM fuse vs. load heuristics.
Add abstract C call handling to IR.
Fix lua_tocfunction().
Fix cutoff register in JMP bytecode for some conditional expressions.
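A hedged illustration of the kind of conditional expression meant here (the exact failing shapes are not listed; this is only the general pattern):

  local function pick(a, b, c)
    return a and b or c   -- and/or conditional expression lowered via JMP bytecodes
  end
  print(pick(true, 1, 2), pick(false, 1, 2))  --> 1   2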
Fix PHI marking algorithm for references from variant slots.