In the pursuit of environmental sustainability, the development of energy-efficient artificial intelligence (AI) models is imperative. This work presents a novel approach to bioacoustic monitoring based on self-supervised learning (SSL) of bird sound representations. We employ MobileNetV3, a lightweight deep learning architecture designed for low computational cost while maintaining high performance. Our method leverages the ability of SSL to exploit unlabeled data, substantially reducing dependence on extensive labeled datasets, which are costly to produce. By choosing appropriate data augmentation techniques to train MobileNetV3 in a self-supervised manner, we extract informative features from the data. These features enable robust few-shot learning, allowing accurate bird species recognition from a single labeled example per class. Using such a lightweight model not only mitigates the environmental impact associated with training large-scale AI models but also improves the feasibility of deploying AI solutions in resource-constrained environments. In future work, we aim to extend our method to the large bird sound collection Xeno-Canto, which contains recordings from over 10,000 species. This scale should allow us to learn more robust features, potentially enhancing the generalizability and effectiveness of our model across a broader spectrum of bioacoustic challenges. Additionally, model compression techniques such as distillation could further reduce the model size, yielding an ultra-compact model well suited for deployment on small devices.
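As a minimal sketch of the one-shot recognition step described above, the snippet below classifies query embeddings by cosine similarity to a single labeled support embedding per class (a nearest-prototype rule). The embeddings here are random toy vectors standing in for the SSL features; in the actual pipeline they would come from the trained MobileNetV3 encoder, and the function name `one_shot_classify` is our own illustrative choice, not from the paper.

```python
import numpy as np

def one_shot_classify(support, queries):
    """One-shot nearest-prototype classification.

    support: dict mapping class name -> 1-D embedding (one labeled example per class)
    queries: 2-D array with one embedding per row
    Returns a list of predicted class names, one per query.
    """
    names = list(support)
    # Stack prototypes and L2-normalize so dot products are cosine similarities.
    protos = np.stack([support[n] for n in names])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    sims = q @ protos.T                        # shape: (n_queries, n_classes)
    return [names[i] for i in sims.argmax(axis=1)]

# Toy stand-ins for SSL features of two hypothetical species.
rng = np.random.default_rng(0)
robin, wren = rng.normal(size=8), rng.normal(size=8)
support = {"robin": robin, "wren": wren}
# Queries: slightly perturbed copies of each prototype.
queries = np.stack([robin + 0.05 * rng.normal(size=8),
                    wren + 0.05 * rng.normal(size=8)])
print(one_shot_classify(support, queries))     # -> ['robin', 'wren']
```

This prototype rule requires no training at inference time, which matches the resource-constrained deployment setting: only the frozen feature extractor and one embedding per target species need to be stored on the device.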