selfcare
Verified data inside a permissionless federated learning infrastructure
Problem Statement
Imagine you're an insurance company that wants to train predictive models on sensitive healthcare data. You approach a hospital for access to this data, but they are unwilling to disclose patient information. Instead, they offer to train the models themselves without revealing the data. But how can you trust that the data used for training is real and that the training process is valid? We found a way to do this!
Solution
The solution: TEEs + data ZKPs. Using Self, proofs can be generated at the time of data creation to verify that the data comes from real individuals. The Trusted Execution Environment (TEE) provided by Oasis ensures that these proofs are verified during the training process. This way, as an insurance company, you can be confident that your models are trained properly and on validated, trustworthy data, without ever seeing the raw data yourself.
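The flow above can be sketched in a few lines. This is a minimal, hypothetical illustration of the control flow only: real Self proofs are zero-knowledge proofs of personhood and real verification runs inside an Oasis TEE, so here a keyed HMAC stands in for the proof and all names (`attach_proof`, `verify_proof`, `train_on_verified`) are assumptions, not the project's or Self's actual API.

```python
import hashlib
import hmac

# Stand-in for the issuer's verification key; a real deployment would
# verify a ZK proof against Self's on-chain verifier instead.
ISSUER_KEY = b"self-issuer-demo-key"

def attach_proof(record: bytes) -> tuple[bytes, bytes]:
    """At data creation: attach a proof that the record came from a real person."""
    return record, hmac.new(ISSUER_KEY, record, hashlib.sha256).digest()

def verify_proof(record: bytes, proof: bytes) -> bool:
    """Inside the TEE: check the proof before the record enters training."""
    expected = hmac.new(ISSUER_KEY, record, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

def train_on_verified(dataset: list[tuple[bytes, bytes]]) -> int:
    """Filter out records with invalid proofs, then train only on the rest."""
    verified = [r for r, p in dataset if verify_proof(r, p)]
    # ...model training on `verified` would happen here, inside the enclave...
    return len(verified)

data = [attach_proof(b"patient-record-1"), attach_proof(b"patient-record-2")]
data.append((b"fake-record", b"\x00" * 32))  # forged proof: rejected below
print(train_on_verified(data))  # trains on the 2 verified records only
```

The key design point mirrored here is that verification happens inside the training environment, so the insurance company never needs to inspect the raw records to trust the result.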
Hackathon
ETHGlobal Cannes
2025
Prizes
- 🏆 Best Self onchain SDK Integration, 2nd place (Self Protocol)
Contributors
- Mateo00007 (11 contributions)
- justincanto (2 contributions)