This PhD project aims to adapt Speech Large Language Models (SLLMs) to clinical speech. Most current SLLMs are trained on typical speech, so they struggle to process speech affected by medical conditions such as dysarthria, aphasia, or dysphonia. This research will develop multimodal models that recognize atypical acoustic patterns and convert them into clinically meaningful biomarkers. The project will add paralinguistic encoders that capture features such as irregular rhythm and intonation (prosody), unstable voice production, and articulation differences. Rather than focusing only on transcription, the project aims to estimate disease severity and track its progression directly from the speech signal. A final part of the project will focus on interpretability, so that clinicians can understand how the model reaches its decisions; this will include visual tools that highlight the time/frequency regions of speech associated with neurological decline. The project will also address practical challenges of clinical research, such as small datasets and privacy-sensitive medical data.