In mmu_insert_pages_no_flush(), when a HUGE_HEAD page is mapped at a 2M-aligned GPU address, the mapping is created as a single Address Translation Entry (ATE) at MIDGARD_MMU_LEVEL(2); in other words, an ATE covering 2M of memory is created. This is wrong because it assumes that at least 2M of memory is to be mapped, but mmu_insert_pages_no_flush() can be called in cases where less than that should be mapped, for example when creating a short alias of a big native allocation.

Later, when kbase_mmu_teardown_pgd_pages() tries to tear down such a region, it detects that unmapping a subsection of a 2M ATE is not possible and writes a log message complaining about this, but then proceeds as if everything was fine, leaving the ATE intact. As a result, the higher-level code goes on to free the referenced physical memory while the ATE still points to it.
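To make the failure mode concrete, here is a minimal standalone sketch of the teardown decision. This is a paraphrase for illustration, not the actual kbase_mmu_teardown_pgd_pages() source: the helper pages_to_invalidate(), the constant PAGES_PER_2M_ATE, and the main() scaffolding are invented. It shows the assumed pattern: a request to unmap fewer than 512 pages at MIDGARD_MMU_LEVEL(2) only produces a warning and clears nothing, while the caller goes on to free the backing pages.

```c
/*
 * Standalone model of the problematic teardown behaviour (illustrative
 * sketch only; the real driver logic lives in kbase_mmu_teardown_pgd_pages()).
 */
#include <stdio.h>

#define MIDGARD_MMU_LEVEL(x) (x)
#define PAGES_PER_2M_ATE 512

/*
 * How many page-table entries the teardown loop will actually invalidate
 * for this chunk. "count" is the number of 4K pages the caller wants
 * removed at the current level.
 */
static int pages_to_invalidate(int level, int count)
{
    if (level == MIDGARD_MMU_LEVEL(2)) {
        if (count >= PAGES_PER_2M_ATE)
            return 1;                     /* the whole 2M ATE can be cleared */
        /* Partial teardown of a 2M ATE is impossible: only warn ... */
        fprintf(stderr,
                "limiting teardown: partial 2MB teardown, need 512, have %d\n",
                count);
        return 0;                         /* ... and leave the ATE intact */
    }
    return count;                         /* level 3: clear individual 4K ATEs */
}

int main(void)
{
    /* A short alias spans only 16 pages, but is backed by a 2M ATE. */
    int cleared = pages_to_invalidate(MIDGARD_MMU_LEVEL(2), 16);

    /* cleared == 0: the caller frees the backing pages anyway, so the
     * still-present ATE now points at freed physical memory. */
    printf("entries cleared: %d\n", cleared);
    return 0;
}
```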