"Tune2fs -l" will tell you if the kernel has noticed file system corruption issues while it is running. For example, if you ask ext4 to delete a file, and ext4 discovers that some of the blocks in that file were already marked as deallocated, that means that the allocation bitmap is corrupt. Note that the allocaiton bitmap was already corrupt at the time when ext4 discovered it. In fact, it could have been corrupt for days or weeks, and if you had been writing new files, it's possible that ext4 might have allocated blocks for new files that were in used for older files, and the user may have lost data as a result.
The only way to reliably say for certain whether a file system is consistent, or might have some amount of corruption, is to run e2fsck on it. Doing this requires either unmounting the file system or creating a read-only snapshot. (If you are using LVM, you can create a read-only snapshot, check the read-only snapshot, and then, if the file system is found to be corrupt, either reboot the system and let e2fsck fix the file system, or send e-mail to the system administrator to schedule downtime to fix the file system.)
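A rough sketch of the snapshot approach, assuming the file system lives on an LVM logical volume named vg0/home and that the volume group has a gigabyte or so of free space for the snapshot:

    # create a small read-only snapshot of the live volume
    lvcreate -s -p r -L 1G -n home-check vg0/home
    # force a full check, opening the device read-only and answering
    # "no" to all repair questions; an exit status of 4 or above means
    # problems were found (or the check itself failed)
    e2fsck -fn /dev/vg0/home-check
    # throw the snapshot away once the check is done
    lvremove -f vg0/home-check

(Something very much like this is what the e2croncheck script shipped in the e2fsprogs sources automates.)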
All of this being said, if the file system has gotten corrupted, the most common cause is a hardware issue. It's possible that it could be a kernel bug, although I periodically run regression tests on the stable kernels, not just on upstream, and we haven't had a file system corruption problem in a very long time. It is also possible that there is a memory corruption bug in a device driver, and either (a) the device driver isn't upstream and the hardware vendor didn't do proper quality control, or (b) the bug was fixed upstream, and even pushed to the latest stable kernel, but the device's kernel wasn't taking updates from the stable kernel series.
Note that if you are looking to see whether the file system was found to be corrupt because the kernel tripped over something blatantly wrong, you don't have to just scrape dmesg or /var/log/messages. You can also try reading the file /sys/fs/ext4/<dev>/first_error_time. If that file contains a non-zero value, it will tell you the time (as seconds since the Unix epoch) when a file system corruption was first detected by the kernel. The errors_count file in that directory will tell you how many file system corruptions have been detected (although that can just be the system tripping over the same problem over and over again). Also of interest: if you want to test how your system handles file system errors being detected by the kernel, you can try writing a string to the trigger_fs_error file --- e.g., echo "test error" > /sys/fs/ext4/sda1/trigger_fs_error.
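For example (again assuming the file system is mounted from /dev/sda1, and GNU date for the timestamp conversion):

    cd /sys/fs/ext4/sda1
    # how many corruptions the kernel has noticed (0 == none recorded)
    cat errors_count
    # convert the first/last error timestamps, reported in seconds
    # since the Unix epoch, into something human readable
    date -d @"$(cat first_error_time)"
    date -d @"$(cat last_error_time)"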
Finally, please take a look at the errors behaviour knob, which you can set via tune2fs. If you really want to make sure that more damage isn't done after a file system corruption issue has been detected, you may want to configure the file system to remount itself read-only when a problem is found --- or maybe just force a reboot, so that e2fsck can be run during the boot sequence to fix the problem before (even more) user data gets corrupted or lost.
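For example (again assuming /dev/sda1), using the -e option:

    # remount the file system read-only as soon as corruption is detected
    tune2fs -e remount-ro /dev/sda1
    # ... or force an immediate panic/reboot so e2fsck runs at boot
    tune2fs -e panic /dev/sda1
    # ... or just log the error and keep going
    tune2fs -e continue /dev/sda1

Note that the errors= mount option (e.g., in /etc/fstab) overrides whatever behaviour is stored in the superblock.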