Through microbenchmarks, processor designers can quickly measure tradeoffs among microarchitectural decisions in a processor design. To avoid the time-consuming and error-prone process of hand-crafting microbenchmarks, designers rely on automated microbenchmark generation frameworks that produce synthetic codes for different purposes [1-5]. However, synthetic microbenchmark characteristics, which are not genuinely derived from real program code, are unlikely to occur in real programs. As a result, driving the processor design process with such unrealistic microbenchmarks can lead to over-engineering the final design. Moreover, such non-code-representative microbenchmarks can fail to identify performance bottlenecks in the microarchitecture and in programs, since they cannot accurately mimic pipeline behavior at fine granularity. Therefore, there is a need to systematically generate code-representative microbenchmarks from real program code.