Multi-task learning (MTL) seeks to learn a single model that accomplishes multiple tasks by leveraging information shared among them. Existing MTL models, however, are known to suffer from negative interference among tasks. Efforts to mitigate task interference have focused either on loss/gradient balancing or on implicit parameter partitioning with partial overlaps between tasks. In this paper, we propose Explicit Task Branching with Non-Learnable Primitives (ETaB-NLP) to mitigate task interference through a synergistic combination of non-learnable primitives (NLPs) and explicit task branching (ETaB). Our key idea is to employ non-learnable primitives to extract a diverse set of task-agnostic features and to recombine them into a branch shared by all tasks and explicit task-specific branches reserved for each task. The non-learnable primitives and the explicit decoupling of learnable parameters into shared and task-specific ones afford the flexibility needed to minimize task interference. We evaluate the efficacy of ETaB-NLP networks on both image-level classification and pixel-level dense prediction MTL problems. Experimental results indicate that ETaB-NLP significantly outperforms state-of-the-art baselines with fewer learnable parameters and similar FLOPs across all datasets.
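The core idea can be illustrated with a minimal sketch. This is not the paper's implementation: the primitive count, branch widths, and the choice of frozen random projections with a ReLU as the non-learnable primitives are all illustrative assumptions. It shows only the structural split: frozen task-agnostic feature extractors, one shared learnable branch, and one learnable branch per task.

```python
import random

random.seed(0)

# Hypothetical toy sizes: 4-dim input, 6 primitives, 2 tasks.
D, P, T = 4, 6, 2

# Non-learnable primitives: fixed random projections plus a fixed
# nonlinearity (ReLU). These weights are frozen and never trained.
frozen = [[random.gauss(0, 1) for _ in range(D)] for _ in range(P)]

def frozen_primitives(x):
    # Extract P task-agnostic features from input x.
    return [max(0.0, sum(wi * xi for wi, xi in zip(w, x))) for w in frozen]

# Learnable parameters are explicitly partitioned into a branch shared
# by all tasks and one branch reserved for each task.
shared = [random.gauss(0, 0.1) for _ in range(P)]
task_specific = [[random.gauss(0, 0.1) for _ in range(P)] for _ in range(T)]

def forward(x, task):
    feats = frozen_primitives(x)
    s = sum(w * f for w, f in zip(shared, feats))                # shared branch
    t = sum(w * f for w, f in zip(task_specific[task], feats))   # task branch
    return s + t  # recombine shared and task-specific contributions

x = [0.5, -1.0, 0.3, 2.0]
outs = [forward(x, t) for t in range(T)]
```

Only `shared` and `task_specific` would receive gradients during training; interference is limited because each task's private branch can specialize without overwriting another task's parameters.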