My main use case for superwhisper is transcribing meetings using window recording. While the output is already very useful, it would be even better if the result were split by speaker.
This could be a per-profile setting, making it possible to keep a "dictation" use case separate from a "meeting transcript" use case.
Optional follow-up idea: allow identifying speakers at timestamps while the recording is still in progress, so that the output can already be tagged with the appropriate speaker names.