Request: Ability to capture and store information online or offline


Often, those who document human rights violations do not have reliable access to the internet. For this reason, it is especially helpful to be able to capture and store the information they are collecting on their device offline, with the ability to upload it securely to a server when internet access is available. This is a feature that many Martus users have come to appreciate and use. We want to use this thread to better understand how the way Martus accomplished this could be replicated, improved or adapted.

@collin proposed these helpful prompts to get the conversation going:

  • In your experience, what situations have you been in that have required offline data collection or access?
  • Do you think Martus handled offline data creation and access well? What would you have changed?
  • What other tools or services offer an offline feature? How do they implement it? Do you like it?


Beyond just storing data offline, it may be useful to consider storing actions, requests and/or responses in more of a message queue model. That way, a user can carry out a workflow, or send a message, and it will be stored in a way that is both secure and ready to transmit once connectivity is available. What we’ve found with mobile users in limited-connectivity scenarios is that often there is a connection, but it may be limited or unreliable. If you force the user to retry over and over, or to remember that they have data stored offline, the burden of that responsibility can cause them to stop using the application, or to find a less secure way to communicate and share.
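To make the store-and-forward idea concrete, here is a minimal sketch of a durable outbound queue. This is an illustration, not any actual Martus or Guardian Project code; the class and method names are hypothetical. The key property is the one described above: messages are persisted locally first, and delivery attempts leave undelivered items in place, so the user never has to retry manually or remember that data is waiting.

```python
import json
import sqlite3


class OfflineQueue:
    """Hypothetical store-and-forward queue, persisted in SQLite."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox ("
            " id INTEGER PRIMARY KEY, payload TEXT NOT NULL)")

    def enqueue(self, message: dict) -> None:
        # Persist before any network attempt -- works fully offline.
        with self.db:
            self.db.execute("INSERT INTO outbox (payload) VALUES (?)",
                            (json.dumps(message),))

    def flush(self, send) -> int:
        """Try to deliver queued messages in order; stop at the first
        failure (e.g. connectivity dropped) and return the number
        delivered. Anything undelivered simply stays queued."""
        delivered = 0
        rows = list(self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id"))
        for row_id, payload in rows:
            try:
                send(json.loads(payload))
            except OSError:
                break  # still offline; leave the rest queued
            with self.db:
                self.db.execute("DELETE FROM outbox WHERE id = ?",
                                (row_id,))
            delivered += 1
        return delivered
```

The caller (or a background job triggered by a connectivity change) simply calls `flush()` whenever it likes; a failed attempt is harmless, which is exactly what removes the retry burden from the user.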

Mostly, we think messaging, ebook reader and podcast apps provide useful capabilities to consider. They don’t assume the user is always connected; instead they cache data offline, queue messages and connect to sync incrementally, as conditions allow or as appropriate. They often also distinguish between mobile WAN and Wi-Fi/LAN in order to manage data usage for people with limited plans, and so on.
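The WAN vs. Wi-Fi distinction can be expressed as a small sync policy, along these lines. This is a hedged sketch with made-up names and thresholds (the `512 KiB` limit and the network labels are assumptions, not anything from an existing app); real mobile apps would get the network type from the OS.

```python
from dataclasses import dataclass


@dataclass
class SyncPolicy:
    """Hypothetical policy: avoid metered (mobile WAN) data for
    large payloads, as podcast and messaging apps commonly do."""
    wifi_only_for_media: bool = True
    metered_limit_bytes: int = 512 * 1024  # illustrative threshold

    def should_sync(self, size_bytes: int, network: str) -> bool:
        # network is assumed to be "wifi", "cellular", or "none".
        if network == "none":
            return False      # stay queued; nothing to retry
        if network == "wifi":
            return True       # unmetered: sync everything
        if self.wifi_only_for_media:
            return size_bytes <= self.metered_limit_bytes
        return True
```

A queue like the one sketched earlier could consult such a policy before each flush, so small text reports go out over cellular while large media waits for Wi-Fi.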


Thanks @n8fr8 for this contribution! I fully agree that this is desirable - the more your work can be sent/synced automatically, without having to retry or check in manually, the more likely the experience is to work well for the user.

I see that ProofMode allows you to share through your existing apps, which is really great for some scenarios, and a different approach from what you did with CameraV. I would love to hear more about what made you consider this shift, and what you see as the potential downsides. Thank you!


There were a number of aspects to our shift to ProofMode from CameraV.

First, sustainability. CameraV had bloated into a huge set of features and functions, and had been built around earlier architectures of the Android operating system. The amount of funding, resources and time it took to sustain this effort was taxing, and was not generating the best “return on investment”, especially considering the niche user base that valued it.

Architecturally, it had also become a kitchen-sink app that tried to do everything, from being a secure camera and gallery to dynamic form support, remote archiving, synchronization and backup, which slowed down our ability to innovate on the core premise of generating chain-of-custody and proof metadata.

ProofMode has enabled us to realize the idea of verifiable metadata generation as a nearly invisible service that is just part of the operating system and core apps. This is really our end goal, and something we hope Google, Apple, Facebook (via Instagram), WhatsApp, Signal and others may eventually adopt. We’re going to be releasing libProofMode for Android soon, and something for iOS in the fall, which will get us closer to that reality.

As for sharing through existing apps, we had many users who, even with CameraV, wanted to share proof data via WhatsApp, or upload it to their own backend services. With the end-to-end encryption and identity verification provided by apps like WhatsApp and Signal, this makes a great deal of sense, and removes the need for organizations to host a server at all. It also reduces an adversary’s ability to monitor a group of users by seeing who is connecting to whom, or to which back-end server, when all that interaction is mixed into a common public cloud service across millions or billions of users.

We are also working now with the OpenArchive project to integrate ProofMode support through a number of mechanisms, so that we can enable public sharing to self-hosted archives now, and private sharing to them in the future.

As for downsides, well, the kitchen-sink approach can work, and can provide a total security model against a strong threat model / adversary. The complete vision of CameraV served a niche community of users well, provided they were willing to put up with the app’s constraints and usability issues. Ultimately, we hope to still serve them, through a suite of apps and better operational security training.


I have become increasingly supportive of the modular model after seeing how often folks abandoned kitchen-sink tools because they had to entirely give up their existing, usable workflows and tools in order to use the platforms that were available. In my opinion, a human rights documentation ecosystem that feels like the process you described, with CameraV users sharing through a variety of apps, is the ideal path forward. It allows users to adopt the components of an HR documentation workflow that they need, without having to adopt the entire idealized documentation and analysis paradigm envisioned by the developers. It also allows the network and/or server layer to be flexible enough to support existing workflows, and robust in the face of targeted censorship.

As you pointed out, the challenge of not providing a kitchen sink is that, without control of the entire workflow, we lose the ability to guarantee a specific security model for our users. I imagine this could be addressed by tool developers, practitioners, and security folks working together to provide “threat -> integration/usage” guidance that is actionable and understandable by human rights practitioners. This could include modular threat models that discuss how to choose components at different levels of the stack.

A rough example off the top of my head: a “targeted individual” in a region with “active censorship” and “network surveillance capabilities” will want to look at specific tools that provide “censorship circumvention” and “anonymize the destination of the uploaded data.” If that same “targeted individual” has “adversaries” who are interested in getting access to the data, and who would have opportunities for physical access to their device (through confiscation, coerced access, etc.), they would want to pick endpoint software that “deletes uploaded files from the device”, “encrypts local data with [a password]”, “encrypts local data with a key the user does not have access to”, and/or offers “a disguised mode which hides the app when inspected”.

Of course, this type of “threat -> feature” documentation only works if there are discrete options available at each level of the human rights documentation stack. It would also require that the groups building the different parts of the stack are willing to ensure some level of integration, and to contribute updates to the documentation as their tools change.
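One way to keep that kind of guidance maintainable is to treat the “threat -> feature” mapping as data rather than prose, so each project can contribute updates as its tools change. A minimal sketch of the idea, with all threat and feature names purely illustrative (taken loosely from the example above, not from any real taxonomy):

```python
# Hypothetical "threat -> feature" guidance expressed as data, so a
# practitioner (or a recommendation tool) can look up which
# capabilities to seek at each layer of the documentation stack.
THREAT_FEATURES = {
    "active censorship": {"censorship circumvention"},
    "network surveillance": {"anonymized upload destination"},
    "physical device access": {
        "delete uploaded files from device",
        "encrypt local data with a password",
        "encrypt local data with a key the user cannot access",
        "disguised mode hiding the app",
    },
}


def recommended_features(threats):
    """Union of the features addressing the stated threats;
    unknown threats contribute nothing rather than failing."""
    features = set()
    for threat in threats:
        features |= THREAT_FEATURES.get(threat, set())
    return features
```

Tools could then be tagged with the features they provide, and matching a practitioner’s threat profile against those tags becomes a simple set comparison.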