This study investigates gender bias in neural machine translation from English to Spanish and proposes a practical mitigation strategy. We first evaluate a baseline model's translations and find that occupation-related sentences often skew toward stereotypical gender forms. To address this, we compile a small, balanced corpus of masculine and feminine contexts and fine-tune a high-quality baseline model using parameter-efficient adapters. The adapted model markedly improves gender agreement while maintaining overall translation quality, as measured by standard metrics. Our experiments show that gender accuracy improves substantially for feminine cases (from 43% to 69%), while the bias gap decreases from 0.41 to 0.06. This lightweight approach combines targeted test sentences with transparent scoring and error analysis, offering an accessible framework for detecting and mitigating gender bias in translation systems. The method transfers readily to other language pairs that require grammatical gender agreement and is well-suited to research or production environments with limited data and computing resources.
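As a minimal sketch of how the gender-accuracy and bias-gap metrics above could be computed, assuming accuracy is the fraction of translations whose gender form matches the reference and the bias gap is the absolute difference between masculine and feminine accuracy (the function names and data format here are illustrative assumptions, not the paper's actual code):

```python
# Illustrative sketch; not the paper's implementation.
# Each example is a (predicted_gender, reference_gender) pair.

def gender_accuracy(examples):
    """Fraction of translations whose gender form matches the reference."""
    correct = sum(1 for pred, ref in examples if pred == ref)
    return correct / len(examples)

def bias_gap(masc_examples, fem_examples):
    """Absolute difference between masculine and feminine accuracy."""
    return abs(gender_accuracy(masc_examples) - gender_accuracy(fem_examples))

# Toy usage with made-up predictions
masc = [("m", "m"), ("m", "m"), ("f", "m"), ("m", "m")]  # 3/4 correct
fem = [("m", "f"), ("f", "f"), ("m", "f"), ("f", "f")]   # 2/4 correct
print(gender_accuracy(fem))  # 0.5
print(bias_gap(masc, fem))   # 0.25
```

Under this scoring rule, a bias gap near zero (as in the reported drop from 0.41 to 0.06) means the model performs about equally well on masculine and feminine test sentences.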