Although Large Language Models (LLMs) have shown promise for human-like conversations, they are primarily pre-trained on text data. Incorporating audio or video improves performance, but collecting large-scale multimodal data and pre-training multimodal LLMs is challenging. To this end, we propose a Fusion Low Rank Adaptation (FLoRA) technique that efficiently adapts a pre-trained unimodal LLM to consume new, previously unseen modalities via low rank adaptation. For device-directed speech detection, using FLoRA, the multimodal LLM achieves a 22% relative reduction in equal error rate (EER) over the text-only approach and attains performance parity with its full fine-tuning (FFT) counterpart while needing to tune only a fraction of its parameters. Furthermore, with the newly introduced adapter dropout, FLoRA is robust to missing data, improving over FFT with a 20% lower EER and a 56% lower false accept rate. The proposed approach scales well across model sizes from 16M to 3B parameters.
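To make the two mechanisms named above concrete, the following is a minimal sketch, not the paper's implementation: a low-rank adapter that lets a frozen text-LLM linear layer consume features from an additional modality, plus an adapter-dropout step that randomly disables the adapter during training so the model tolerates missing modalities. All names (FusionLoRALinear, adapter_dropout, audio_feats) are illustrative assumptions.

```python
# Sketch of fusion via low-rank adapters with adapter dropout (assumed design,
# not the paper's exact architecture).
from typing import Optional

import torch
import torch.nn as nn


class FusionLoRALinear(nn.Module):
    """Frozen base linear layer plus a low-rank adapter fed by a second modality."""

    def __init__(self, base: nn.Linear, modality_dim: int, rank: int = 8,
                 adapter_dropout: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # keep the pre-trained weights frozen
            p.requires_grad = False
        # Usual LoRA A/B decomposition: project the extra modality down to `rank`,
        # then up to the base layer's output dimension.
        self.down = nn.Linear(modality_dim, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # adapter contributes nothing at init
        self.adapter_dropout = adapter_dropout

    def forward(self, text_hidden: torch.Tensor,
                modality_feats: Optional[torch.Tensor]) -> torch.Tensor:
        out = self.base(text_hidden)
        # Adapter dropout: with some probability during training, or whenever the
        # modality is absent, skip the adapter so the frozen text path still
        # produces a valid output.
        drop = self.training and bool(torch.rand(()) < self.adapter_dropout)
        if modality_feats is not None and not drop:
            out = out + self.up(self.down(modality_feats))
        return out


# Toy usage: a frozen 512-d text projection augmented with 256-d audio features.
layer = FusionLoRALinear(nn.Linear(512, 512), modality_dim=256, rank=8)
text_hidden = torch.randn(2, 10, 512)
audio_feats = torch.randn(2, 10, 256)
print(layer(text_hidden, audio_feats).shape)   # with audio present
print(layer(text_hidden, None).shape)          # degrades gracefully without it
```

Because only the low-rank `down`/`up` factors are trainable, the number of tuned parameters stays a small fraction of the base model, which is the property the abstract contrasts with full fine-tuning.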