The slowdown of CMOS technology scaling and the trade-off between efficiency and flexibility have fueled the exploration of novel architectures built on emerging post-CMOS technologies, e.g., resistive RAM (RRAM). In this article, a nonvolatile fully programmable processing-in-memory (PIM) processor named Liquid Silicon is demonstrated, which combines the superior programmability of general-purpose computing devices, e.g., field-programmable gate arrays (FPGAs), with the high efficiency of domain-specific accelerators. Beyond general computing applications, Liquid Silicon is particularly well suited for artificial intelligence (AI)/machine learning and big data applications, which not only pose high computational/memory demands but also evolve rapidly. To fabricate the Liquid Silicon chip, HfO2 RRAM is monolithically integrated on top of a commercial 130-nm CMOS process. Our measurements confirm that the Liquid Silicon chip can operate reliably at a low voltage of 650 mV. At a nominal supply voltage of 1.2 V, it achieves 60.9 TOPS/W in neural network (NN) inference and 480 GOPS/W in content-based similarity search (a key big data application), showing 3x and 100x improvements over state-of-the-art domain-specific CMOS-/RRAM-based accelerators. In addition, it outperforms the latest nonvolatile FPGAs in energy efficiency by 3x in general computing applications.