17:00:26 #startmeeting OONI weekly gathering 2016-08-29
17:00:26 Meeting started Mon Aug 29 17:00:26 2016 UTC. The chair is hellais. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic.
17:00:37 so, let's get this started
17:01:04 hellais: in Soviet Russia you'll rather have github blocked (happened several times) than TorProject (never happened to date) :-)
17:01:29 #topic Discuss the update strategy from 1.x -> 2.x
17:01:43 darkk: but eventually it got unblocked?
17:02:02 hellais: true, just kidding
17:02:11 darkk: lol
17:02:19 :P
17:02:29 I think that if they blocked OONI/Tor servers, updates will be the least of our concerns :)
17:02:46 I'm serious about blocked github/wikipedia, but it was always unblocked in a day or so.
17:03:05 So it's not a reason to avoid them.
17:03:18 anadahz: what other concerns do you have if OONI/Tor servers are blocked?
17:03:43 I think it's important to seriously take the fingerprintability component into consideration
17:03:45 because it's a fact that they are blocked in many of the places where we operate, and we have various ways of circumventing the blocking
17:04:01 cloudfronting, tor bridges, etc.
17:04:30 hellais: that we won't be able to receive OONI measurements
17:04:58 anadahz: we will, they can be submitted either via cloudfronting or a tor hidden service (with bridges)
17:05:41 hellais: yeah, but is there built-in support that falls back to cloudfronting?
17:05:51 agrabeli: +1
17:06:08 anadahz: for collectors and test helpers, yes.
17:06:37 hellais: :)
17:06:46 I don't remember if it's also done for the bouncer, but if it's not yet implemented it's part of the design
17:06:51 but not for ooniprobe
17:07:04 hellais: previously you mentioned a 1.6.0.1 update that also adds the script for updating lepidopter, can you say more on that?
17:07:19 yeah, sorry, we veered a bit off topic
17:07:31 so let me first provide a tiny bit of context around this
17:08:31 for the past week anadahz and darkk have been doing a feasibility study for implementing automatic, unattended full OS/image updates of lepidopter.
17:09:02 based on their analysis it seems apparent that whatever solution we are going to integrate, extend, implement, and deploy will require a considerable amount of effort and time.
17:10:07 moreover, even if we go for the full OS update system, there are still some pis out there that we will probably not be able to ship a new SD card to, and hence we would still need to come up with some update system to trigger updates on them
17:10:38 currently our update mechanism is based on a cronjob that runs pip install --upgrade every 7 days, and that is the only vector we can use to provide updates to all raspberry pi images.
17:11:02 this means we have to work with what we have and come up with something that can work for them as well
17:11:49 I'd say `deploy` is the hardest part, as is on-the-fly migration of these pis to any new scheme with OS-wide updates. Everything else _may_ be done in a similarly sketchy way.
17:12:26 given this, I believe it's best to reprioritise our work for the upcoming week on a minimal update system that works by updating only the ooniprobe software, provisioning the raspberry pi images with the new software update mechanism, and then handling the update to the 2.x series via this update mechanism
17:12:52 parallel to this we should begin integrating 2.x into a new raspberry pi image that natively ships the update mechanism and the GUI
17:13:39 don't the rpis already run an apt-get upgrade cron against the ooniprobe software?
17:14:13 darkk: the idea is that in the immediate future we would not be doing on-the-fly migration to OS-wide updates. Instead we would do it in 2 steps: 1. Ship an updater via the pypi vector 2.
Update to 2.x via the updater
17:14:27 willscott: no, ooniprobe is installed via pip, since there were no recent debian packages of ooniprobe
17:14:45 anadahz: but does it periodically re-pip-install it?
17:14:57 regarding on-the-fly updates of the pis, why wouldn't it be enough to run apt-get to fully update the OS (minus OONI)?
17:15:00 willscott: right
17:15:15 isn't that enough?
17:15:16 willscott: yes, however they don't actually do apt-get upgrade, they do pip install --upgrade, which means that to handle raspberry pi specific updates we would have to include all this logic inside of the setup.py
17:15:45 however setup.py is also used by others who don't run it within the context of lepidopter, so it would mean maintaining a bunch of raspberry pi/lepidopter specific logic as part of the stock ooniprobe source tree
17:15:48 what other changes do you want to be able to make to the 1.x image via updates?
17:16:04 it would be better to have this decoupled and part of something else that is only executed on the raspberry pis
17:16:18 hellais: yes, makes a lot of sense!
17:16:27 sbs: as far as I see, the only reason to avoid that is (unlikely) FS corruption if power is lost during the update && upgrade.
17:16:42 willscott: there are various changes that need to happen that are lepidopter specific. I made a partial list of them here: https://github.com/TheTorProject/ooni-probe/issues/593
17:16:47 it sounds like 2 upgrade paths for the 1.x and 2.x images is duplicating effort to support something that seemed pretty clearly advertised as an alpha / "this may break in the future" status
17:17:00 cool
17:18:11 darkk: right, so there is a way of running the update (like having two partitions) that guarantees that an aborted-while-in-progress update can be restarted, correct?
17:19:03 sbs: correct, but migration to such a scheme is too complex to be done right now, so hellais suggests postponing it.
Makes perfect sense, as old pis will eventually die of wearout :)
17:19:04 willscott: yeah, that is true, it was clear that it was an alpha and future versions may break, however a lot of partners have gone through a lot of effort to deploy these probes in very risky countries and it will be very hard for us to reach them with a new SD card to do the update
17:19:06 willscott: true, but apparently it seems to be a big issue for people to swap SD cards
17:19:42 sbs: yes, a dual-copy partition method such as SWUpdate
17:20:01 willscott: I can give you more details on this off the record, but let's just say that some of them have gone to pretty extreme lengths to get them into certain countries, and these are countries where you can't exactly rely on the post to send an SD card
17:20:16 also the people who have the pis are not technically savvy enough to burn it themselves
17:20:26 makes sense
17:20:43 sbs: another way is via partition overlays
17:20:46 hellais & willscott: and in addition to this, most of the probe hosts in these countries aren't "technical" enough to make changes as needed
17:21:06 sbs: re: https://github.com/TheTorProject/lepidopter/pull/69
17:21:07 darkk anadahz: thanks
17:21:12 i guess the only other thing to think about is whether the number of deployed pis is such that it is less work to develop software for the upgrade, or to manually do upgrades via ssh on each pi
17:22:08 hellais: so, how will the update script be rolled out in practice from the context of setup.py running on lepidopter?
17:22:27 willscott: some of these pis don't have ssh access, so that is also not something we can easily do on all of them
17:22:30 willscott: there are a number of pis out there that we probably won't be able to upgrade via ssh
17:23:09 hellais: but we can enable it via the updater :)
17:23:22 but how many, are those the ones you can't get an updated sd card to?
17:23:45 (i'm not opposed to the plan put forth, just asking questions to see if there's a way to spend less developer time on this)
17:24:20 hellais: how many did you count last time?
17:24:21 btw, have we ever seen a lepidopter pi with a damaged FS so far?
17:24:23 would having ssh access (and so potentially unfettered power over the boxes) change the terms of our agreements with partners and/or cause other legal issues?
17:24:30 darkk: yes, mine
17:24:45 located in DE
17:25:22 sbs: nothing regarding ssh access is specified in MoUs, though some partners have been open to this
17:25:39 sbs: good question, so basically the plan is the following. We cut a new release 1.6.1.1 and publish it to pypi. As part of setup.py this new release runs a procedure that will 1. Remove the auto-update cronjob 2. Install updater.py and the public.asc key in the correct locations 3. Set up systemd to run this at an interval of 6h.
17:25:45 darkk: it died after ~6 months
17:26:02 anadahz: how many what?
17:26:23 hellais: you'd rather swap (2) and (1)
17:26:40 hellais: how many pis are out there
17:26:47 does lepidopter pull from the mainstream pip package repository?
17:27:06 darkk: sure, I mean this should all be done as a transaction, so if any of the steps fails it should revert back to the initial state
17:27:22 anadahz: 20
17:27:46 hellais: uhm, and of course this procedure is conditional on the presence of lepidopter, right?
17:28:02 anadahz: as agrabeli said we have given out 20, but IIRC there are only about 10-15 actively submitting measurements.
17:28:14 sbs: yes, of course.
17:28:57 anadahz & willscott: there are 22 pis out there currently (sorry, miscounted earlier)
17:29:12 hellais: I guess the minimal 1.6.1.1 can just do step 2 and do all the rest using updater.py, right?
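[editor's note: a minimal sketch of the transactional rollout procedure hellais describes above. All paths, the backup strategy, and the timer-unit contents are assumptions for illustration; nothing in the log specifies them. The key property discussed in the meeting is that a failure at any step reverts to the initial state.]

```python
import os
import shutil

# Assumed locations; the meeting log does not specify exact paths.
CRONJOB = "/etc/cron.weekly/ooniprobe-update"
UPDATER_DIR = "/opt/ooni-updater"
TIMER_UNIT = "/etc/systemd/system/ooni-updater.timer"

def install_updater(src_dir, cronjob=CRONJOB, updater_dir=UPDATER_DIR,
                    timer_unit=TIMER_UNIT):
    """Steps 1-3 of the plan, applied as a transaction: every action is
    paired with an undo step, and any failure rolls everything back."""
    undo = []
    try:
        # 1. Remove the weekly `pip install --upgrade` cronjob (keep a backup).
        if os.path.exists(cronjob):
            shutil.move(cronjob, cronjob + ".bak")
            undo.append(lambda: shutil.move(cronjob + ".bak", cronjob))
        # 2. Install updater.py and the public.asc signing key.
        os.makedirs(updater_dir, exist_ok=True)
        for name in ("updater.py", "public.asc"):
            dest = os.path.join(updater_dir, name)
            shutil.copy(os.path.join(src_dir, name), dest)
            undo.append(lambda d=dest: os.remove(d))
        # 3. Register a systemd timer that runs the updater every 6 hours.
        with open(timer_unit, "w") as f:
            f.write("[Timer]\nOnUnitActiveSec=6h\n"
                    "[Install]\nWantedBy=timers.target\n")
        undo.append(lambda: os.remove(timer_unit))
    except Exception:
        for step in reversed(undo):  # revert to the initial state
            step()
        raise
```

(The systemd fragment is a sketch too; a real deployment would also ship a matching `.service` unit and run `systemctl daemon-reload && systemctl enable --now`.)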
17:29:22 anadahz and willscott: out of all these pis, only 3 of them have tor hidden services
17:29:29 the nice thing about doing this by replacing the setup.py with the updater is that we only have to implement the update once, and at some point in the distant future we can perhaps remove this logic from the setup.py script entirely
17:29:48 (we probably want to keep it there also for future versions just to be sure that everybody gets a chance to update)
17:30:04 obviously it also needs to check if the updater is already installed and run only if it isn't
17:30:15 andresazp: yes, that has the latest stable release
17:31:01 we'll be shipping about 15 pis over the next month
17:31:46 many of which will end up in countries that we won't have easy access to later on
17:31:53 agrabeli: ack regarding the MoU... if possible I'd avoid us having root access on lepidopters because it increases the scope of what we can do using the probes way beyond the software we deploy using standard channels, and this imho could put partners in a more troubling situation if caught, not to mention that, say, I have access to all lepidopters, I am compromised, and someone uses that access to do nasty things
17:32:17 If it's OK and there is time in the meeting I would love to share an overview of our plans for the Venezuela deployment to get feedback from you guys
17:32:46 sbs: I totally agree with you.
17:32:51 sbs: very good point, have a look at: https://github.com/TheTorProject/lepidopter/issues/35
17:33:46 andresazp: yeah, we'd love to hear your overview
17:33:47 anadahz: #35 is solvable with PAM
17:33:59 agrabeli: a compromised admin's key is harder to mitigate
17:34:12 andresazp: sure, do you want to add this topic to the agenda (https://pad.riseup.net/p/ooni-irc-pad)?
17:34:56 sure
17:35:39 hellais: I'm going to look at the lepidopter-update after the meeting, is there anything specific that you would like to share about the implementation?
17:36:27 sbs: I expect it to be fairly hard to entirely remove root access from the lepidopter image without significantly impacting our ability to expand the platform in the future. I mean, one of the main reasons why we use a dedicated device is so it doesn't have any user data on it. Confidentiality of the local network is a concern, but I don't think it's eliminated by disabling root.
17:37:35 hellais: I am not advocating against having a root user, I am advocating against us having ssh access as root
17:37:50 sbs: agreed
17:37:51 sbs: what's the difference?
17:38:12 anadahz: I don't have anything specific to add. It's fairly simple how it works. The gist of it is 1. Check for a certain github tag (called latest) that has in its resources a file called "version" containing the latest version number (they are ever-increasing ints) 2. Download every update file from $current_version to $latest_version and verify that each download is signed 3. For each version execute the python update script
17:38:34 that repo includes the update agent and the scripts for maintainers to manage the update service
17:38:57 darkk: that we do not have the power to log in to a specific raspberry and launch arbitrary commands, but we must roll out updates using our infrastructure -- which is more open and scrutinizable by third parties
17:39:51 a gotcha is that I make the assumption that the update scripts are idempotent, so as to shift the complexity into the update scripts themselves, which are easier to update, while the agent can stay the same in the long run
17:41:18 anyways, this is what I had to say here
17:41:32 for me we can move onto the next topic if nobody has anything to add
17:41:52 darkk: to further clarify, I think we should not have ssh access, because I think we should not be able to run arbitrary commands on the probe in an unaccountable way, and I think this is also a safeguard for partners (one thing is if you can demonstrate what software was running, another if one can
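[editor's note: a sketch of the three-step update-agent loop hellais outlines above (read a `version` file from the `latest` tag, download and signature-check each intermediate update, run each script). The release URL layout and the `fetch`/`verify`/`run` helpers are illustrative stand-ins, not the real lepidopter-update API.]

```python
import urllib.request

# Assumed release location; the real repo layout may differ.
RELEASES = "https://github.com/example/lepidopter-update/releases"

def fetch(path):
    """Default fetcher: download one release asset over HTTPS."""
    with urllib.request.urlopen("%s/%s" % (RELEASES, path)) as resp:
        return resp.read()

def update(current_version, fetch, verify, run):
    """Bring the system from current_version up to the version advertised
    by the `latest` tag, one signed update script at a time."""
    # 1. The `latest` tag carries a "version" file with an ever-increasing int.
    latest = int(fetch("download/latest/version"))
    # 2./3. Download, verify, and execute every intermediate update in order.
    for version in range(current_version + 1, latest + 1):
        script = fetch("download/v%d/update.py" % version)
        sig = fetch("download/v%d/update.py.asc" % version)
        if not verify(script, sig):  # e.g. a GPG check against public.asc
            raise RuntimeError("bad signature for update %d" % version)
        # Scripts are assumed idempotent: re-running a partially applied
        # update after a crash must be safe, so the agent itself never
        # needs to change.
        run(script)
        current_version = version
    return current_version
```

The idempotence assumption hellais mentions is what keeps the agent stable: all version-specific complexity lives in the downloaded scripts, and a crash mid-update is recovered by simply re-running.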
argue a partner gave a box to "foreign agents")
17:42:28 sbs: +1
17:42:31 hellais: I'll take some time to review the script more carefully and do some reasoning on its idempotence
17:43:21 sbs: great, thanks!
17:43:22 in any case, I don't think we should have ssh access into partners' pis (for the reasons mentioned by sbs), though I think it is important that we are able to somehow troubleshoot remotely and that scripts get updated automatically
17:43:23 hellais: we can talk about this offline
17:43:31 #topic Set release date for the 2.x series
17:43:31 sbs: I understand the point, but I'm still unsure if I accept it from an engineering point of view (say, running an update on 5 pis over ssh may be MUCH easier than writing a proper updater script in advance)
17:44:02 * darkk has to think more about a good way to have troubleshooting access
17:45:03 sbs: I am also very concerned about having SSH access, but it's nearly impossible to achieve this when you release only one lepidopter release
17:45:44 re: ssh access, we could potentially have a feature exposed from the web UI that allows the user to enable and disable ssh access on demand
17:45:50 darkk: I see your point and have a similar feeling, but I guess here we need to strike a balance -- a possible solution could be to allow selected partners to give us ssh access in specific cases if they choose to do so (say we really don't know how to proceed and we ask one guy to enable ssh on his lepidopter -- but that should not be the default)
17:46:24 if we want to be extra careful we could even have this happen via a special account where every command executed is logged and written to an auditable log
17:46:35 similarly to how teamviewer works
17:46:58 yeah, the GUI is an option
17:47:07 the probe operator, when they request support, would go to some admin interface of the GUI, click "enable remote SSH access", and get the hidden service address
17:47:14 hellais: yep (even though, in theory, once we have root-like access --
which we would probably need to have? -- it's game over because we can subvert everything)
17:47:17 and share it with us together with some secret
17:47:21 sbs: asking people to enable SSH access is very hard though!
17:47:27 wouldn't we want all future images to be updated by default?
17:47:28 hellais: that's probably not a good idea. Too much work, too little trust in the log.
17:47:38 anadahz: as hellais is saying, we can figure out a way to do that for them
17:48:08 anadahz: I mean, a user-friendly way to allow them to give us access to the SSH of the probe
17:48:44 sure, it's easier to just ssh into them, but that's something that shouldn't scale imho
17:48:49 yeah, but then we need to check if this aligns with the development tasks that we also need to implement..
17:49:05 agrabeli: yes, my understanding is that we want auto-updates, but I think we want that via a specific procedure, not remote ssh access
17:49:30 it's true that we could potentially subvert it, I can think of some ways in which we could make this harder for us to do (like making all commands go through some sort of proxy that uploads them to an append-only log), though this is quite some over-engineering and probably for a marginal benefit
17:49:33 we might be able to ssh into some of the partner pis with their consent, but do we really want users in general and across time to have this option?
17:50:01 I think in the end if the user is OK with giving us this sort of power, and it's something to be used sparingly, only in emergency situations, it's probably ok
17:50:28 hellais: I agree with the marginal benefit part, here I was just trying to explore all the facets of the problem, not suggesting we do something less simple
17:50:37 hellais: +1
17:50:42 hellais: yeah, but only on rare occasions with the consent of the partner, and not as something provided as an option in the GUI
17:51:09 agrabeli: I guess this could be an advanced option that we can request partners to enable in specific cases
17:51:10 I'd say having some sort of hardware toggle is a good option. E.g. reading authorized_keys from a USB stick :)
17:51:41 darkk: that's brilliant
17:51:50 it's visual, it's obvious, it's trivial to revoke
17:51:53 and I guess we take into consideration that all people who use lepidopter are partners?
17:52:39 anadahz: well, if somebody else installs lepidopter, I am not sure we would even want to consider having ssh access on their probe
17:52:42 which is actually not really the case
17:52:51 anadahz: do we?
17:53:06 sbs: sure, but how can we do this with one release?
17:53:24 sbs: I have raised similar concerns over time..
17:53:47 and it's not that easy to maintain 2 releases, or maybe it is?
17:54:26 anadahz: I think I am missing some bits, so I do not fully understand why you are now talking about two releases
17:55:06 if ssh requires a signed key the partner has -- or rather does not have if they just download the image -- is that still a problem?
17:55:56 sbs: if we could have a partner-only lepidopter release we wouldn't have to discuss adding SSH or not now, if partners were OK with this.
17:56:53 andresazp: yes, I think the difference is just that we also give partners the usb key and they choose; other people do not have the usb key
17:58:11 anadahz: uhm, what about: lepidopter is always lepidopter, and partners can use a usb key to give us ssh access if needed?
17:58:18 anadahz: I'm not sure most partners are comfortable with ssh (or even know what that is), though in general, as mentioned, I don't think we should be ssh-ing into people's boxes (whether they are partners or not)
17:59:43 sbs: as darkk suggested?
18:00:23 well, whatever this option is, it's going to have to be something that is enabled by the user performing some sort of action, so I don't see how it's a problem to have a partner vs a non-partner image
18:00:58 sbs & darkk: I think that's a great idea, given that we do that only in limited cases, and with the consent of partners. it sounds like a better idea than including ssh access as an advanced option of the GUI that anyone could enable (without fully understanding what they're enabling).
18:01:10 agrabeli: yep
18:01:13 that is, if they are a partner they will perform the action (insert the USB stick, click on something in the GUI, jiggle the power cord twice and do a handstand, etc.), if they are not a partner they will not perform this action
18:01:14 well, let me introduce 'degradation levels' for the lePIdopter: 1) it works ok 2) it may be repaired with ssh 3) the SD card has to be replaced (bootloader & both root images are damaged) 4) the whole pi has to be replaced (PSU failure, for example)
18:01:16 sbs: increased complication, as not all partners have physical access to the pis
18:01:21 the question is -- do we need step (2)
18:01:45 or may we move to (3) any time we need (2)?
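[editor's note: darkk's USB-stick toggle could look something like the sketch below: a check (run periodically or from a udev hook) that grants ssh access only while a stick carrying an `authorized_keys` file is plugged in, so revoking access is as simple as pulling the stick. The mount point, target path, and enable/disable mechanics are all assumptions, not anything the project committed to.]

```python
import os
import shutil

# Assumed paths; a real version would also start/stop the sshd service.
USB_KEYS = "/media/usb/authorized_keys"
ROOT_KEYS = "/root/.ssh/authorized_keys"

def sync_ssh_toggle(usb_keys=USB_KEYS, root_keys=ROOT_KEYS):
    """Mirror the USB stick's key file into place; return True if ssh
    access is enabled after this check."""
    if os.path.exists(usb_keys):
        # Stick present: copy the partner-provided key in place.
        os.makedirs(os.path.dirname(root_keys), exist_ok=True)
        shutil.copy(usb_keys, root_keys)
        os.chmod(root_keys, 0o600)  # sshd rejects overly permissive key files
        return True
    if os.path.exists(root_keys):
        # Stick removed: revoking access is just deleting the file.
        os.remove(root_keys)
    return False
```

This matches the properties praised in the meeting: the state is visual (the stick is either in or out), obvious, and trivial to revoke without any remote action.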
18:01:52 if we want to "enforce" that this action is not possible for non-partner probes, this could just be a matter of adding something to the setup wizard where the user declares whether they are a partner or not
18:02:20 It seems that the lepidopter requirements are changing with the times and per discussion :P
18:03:30 anadahz: I see, well, we should probably find the simplest solution that avoids exposing partners too much
18:03:58 darkk: case 2) is probably a failure in the updater, right?
18:05:00 as willscott mentioned, lepidopter was meant to be an alpha. But it seems that we have deployed a bunch of pis to people who cannot really replace the SD cards, and now we are finding the most optimal ways to work around this.
18:05:23 sbs: I can imagine several cases. Actually, it's `any case that needs human intelligence to solve`. It's something unplanned & unpredictable going wrong with the software.
18:05:59 sbs: anything that can't be solved by reboot and auto-wipe
18:07:00 anadahz: yeah, I guess the problem comes down to how we roadmapped our deliverables. We probably should have only started deploying probes once lepidopter was stable and included all these features... but then again, if we had waited for that, we wouldn't have the measurements and country reports on time... ah well. :)
18:07:36 per my limited experience, we would have needed to change at least 4 SD cards in a few months of operation if we didn't have SSH, because of unforeseen problems
18:07:39 agrabeli: True!
18:08:34 darkk: okay, right... I think I know too little about managing raspberry pis to estimate the likelihood of these kinds of problems, I trust the experience of anadahz and yourself
18:08:37 agrabeli: I'm describing the issue to the rest of the people so they understand how we came to this solution
18:08:46 andresazp: do you mean bricking lePIdopter via ssh unexpectedly?
18:10:03 We didn't use lepidopter, but my point was in support of the option of ssh access for unforeseen problems, if the user allows it
18:11:05 In our case, the problem we had would not have been fixed by running pip update
18:11:58 andresazp: are the cases documented somewhere? The information regarding real failure scenarios may be useful.
18:12:57 I'd love to see them documented on the ooni-operators@ mailing list :-)
18:13:13 since we're running out of time, should we proceed to andresazp's topic (and cover topic 2 last)?
18:13:39 darkk: +1
18:14:00 agrabeli: yep!
18:14:13 @darkk I will try to write there a bit, some machines we expect had FS or HW problems are being collected but we still don't have them
18:14:36 First of all, were you able to include our previous VE reports in your DB?
18:14:59 sounds good
18:15:31 #topic Update from Venezuela
18:16:26 andresazp: not yet, I have a copy of the big tarball you gave me, but we have been holding back on integrating them until we get the pipeline into a more stable state (bulk loading of many measurements with the current system seems to not make the pipeline happy), and we also have some space issues
18:16:27 I shared the link to the compressed reports in an old meeting.
It was on someone's etherpad grain on a self-hosted sandstorm instance
18:16:42 ok
18:17:34 So our plans for VE are more or less the following
18:19:07 Deploy at least 4x the rpis as in the pilot -- we want to use ooni 2.x but that is not guaranteed as of yet
18:20:16 in 12 or more cities
18:21:15 it's possible that we might use lepidopter, but i'm not sure
18:21:17 we will run our own backend, but would love help configuring it so that it reports
18:21:30 so that it reports automatically to your pipeline
18:22:06 (assuming those parts still work as I understood them)
18:23:07 we would run another server in python/django that would help us visualize the data collected by our probes
18:23:47 taking on part of the role of the ooni pipeline; the official pipeline should get all reports nonetheless
18:26:14 andresazp: this is great!
18:26:17 a couple of questions:
18:26:28 It's just a smaller solution, easier to get going and integrate with other things. Here we would report "incidents" that cohesively and contextually describe a specific incident (not a measurement)
18:26:28 1) What is the size of the deployment you are looking at?
18:26:29 this would be for internal consumption but the code will be open
18:26:47 2) What is the timeline for the project (i.e. when are you going to begin collecting measurements)?
18:26:50 60 to 70 probes
18:27:24 andresazp: 60-70 probes in VE?
18:27:25 finally we would set up a website to check on all of the incidents
18:28:49 andresazp: this all sounds fantastic!
18:28:55 an incident, conceptually, may reflect information from many measurements/reports/probes -- even different tests or targets, as long as they are related, and put in context
18:30:21 andresazp: that is a very interesting concept.
I wonder if the concept, at least, could then also be extended to the explorer
18:30:29 similarly, as you would say that police were violently dispersing a peaceful protest, and include a list of injured people, rather than repeat "X person was beaten in LOCATION during the protest on DATE"
18:30:57 andresazp: are these incidents recorded by having trusted people review the measurements, or are you also thinking of crowdsourcing the analysis?
18:32:02 someone with privileges on our server would post the incidents or update them
18:32:36 after reviewing measurements
18:33:14 that server could get really complicated really fast, so we are keeping things simple for the time being
18:33:42 We are also working on developing a few extra tests
18:34:50 a speed test; despite being a slow system, we believe we can saturate the connection, most common speeds are under 1mbit in ideal conditions
18:35:03 locally
18:35:36 andresazp: you should look into measurement-kit for running speed tests. It has a pretty good implementation of NDT.
18:35:46 and it also supports submitting the results to an ooni collector
18:35:53 I see
18:36:27 we have a simple implementation that ran outside of ooni in the pilot, but we thought of integrating it
18:36:42 andresazp: do you have an estimate of when the first measurements will be coming in?
18:38:36 we want to do something like the whatsapp connectivity test, and VPN block tests for tunnelbear (site previously blocked), Zello (a push-to-talk radio app that has been blocked) and hotspotshield
18:39:15 andresazp: what system are you using for running measurements?
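[editor's note: the "incident" concept andresazp describes above -- a human-curated record grouping related measurements, reports, and probes with narrative context -- could be modeled minimally as below. The field and method names are invented for illustration; they are not taken from the team's Django codebase.]

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Incident:
    """A curated record that groups related OONI measurements and adds
    context, as opposed to publishing each raw measurement separately."""
    title: str                 # e.g. "Zello blocked nationwide"
    context: str               # curator's narrative explanation
    measurement_ids: List[str] = field(default_factory=list)
    tests: List[str] = field(default_factory=list)  # e.g. web_connectivity

    def add_measurement(self, measurement_id: str, test_name: str):
        """Attach one measurement; different tests/targets may be grouped
        as long as they relate to the same incident."""
        self.measurement_ids.append(measurement_id)
        if test_name not in self.tests:
            self.tests.append(test_name)
```

This mirrors the analogy in the log: one incident ("police dispersed a protest, N people injured") rather than one entry per underlying observation.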
18:39:47 the pilot had rpi 2, rpi 2+ and occasionally a b
18:39:59 all the new ones are 3b
18:40:02 andresazp: if you haven't seen it, you may be interested in this whatsapp test I wrote some time ago: https://github.com/TheTorProject/ooni-probe/blob/feature/whatsapp-test/ooni/nettests/blocking/whatsapp.py
18:40:40 it's not really thorough, that is, it doesn't actually speak the whatsapp protocol, but it does a general connectivity test towards all the whatsapp endpoints used in the mobile app and web app
18:40:54 andresazp: I was already working on making debian packages for measurement-kit
18:41:35 andresazp: and apart from that, if you're interested I can help you add measurement-kit for running NDT in your use case
18:42:01 That's what we see, and we would pretty much replicate the approach, unless the service wants to partner and help us make it a bit more sophisticated
18:42:26 sbs: That would be great
18:43:12 possibly this would be required for zello IIRC
18:43:46 andresazp: Since ooniprobe v2 is going to be the new stable release it would be really helpful if you could test ooniprobe 2.0.0, which will be shipped by default in the lepidopter beta -- planned to be released by the end of this week.
18:44:11 anadahz: Carlos is testing it
18:44:25 nice!
18:45:05 There are some features that we need on v2 though, and I'm not sure if we might be able to implement or re-implement
18:45:10 things from v1
18:45:32 andresazp: great! Reports on any bugs or feature requests would be super useful.
18:45:44 andresazp: are there features that were present in v1 that you are missing in v2?
18:46:12 comments in some form, so that we could use a dict for different values
18:46:26 or maybe just a string
18:46:37 that we could parse
18:46:46 either in the ooni settings
18:47:24 or as a flag or option when running a deck. I understand that running tests changed a lot in v2
18:48:03 andresazp: you can do this in the new deck specification as well, by means of the annotations.
18:48:07 andresazp: to better understand, you are talking about the deck format, correct?
18:48:38 you can specify these both as global (for the whole deck) or task-specific
18:51:08 for example like this: https://gist.github.com/hellais/d4fce0a27ee18105e990e955d3de1df7
18:51:38 I haven't been able to test v2 myself much, but I thought that feature wasn't ready
18:52:21 I see
18:53:08 it should be working, and I have done some testing of annotations as well. Though the best thing is if you report any type of difficulty you encounter with v2, or even if you need some specific feature or adjustment to be implemented, either as a github ticket or even just by sending me an email
18:53:09 I believe Carlos tried to test it
18:53:30 but we will certainly double test
18:53:37 For a bit of context:
18:54:10 In the VE pilot the annotations included hardcoded information like probe ID, ISP, city, and whether the test was run by cron or manually by one of us.
18:54:11 The commands concatenated the request to run a certain test and list with the contents of a file that ID'd the probe
18:56:15 sorry, the lack of spellcheck is killing me
18:57:29 Where we might be more able to help is what we discussed about adding a flag for web_connectivity so that it doesn't necessarily run HTTP requests
18:57:49 for bandwidth reasons
18:58:16 I'm all ears for any more feedback and also criticisms
18:58:32 yes, I understand. I guess you could achieve the adding of this metadata by generating the decks on a per-probe basis.
18:58:36 criticism of how we plan to do it
19:00:07 andresazp: I fear that not running the http request part of web_connectivity could lead you into not being able to identify various cases of censorship that don't rely on DNS-based blocking. The bandwidth consumption of the web_connectivity test is much less than http_requests, but it's still much more than just doing the DNS resolutions, obviously.
19:01:20 I wonder how much mileage you would get from running the http request only when the DNS doesn't match.
19:01:46 we would run the complete web_connectivity, including the http_request part, but not as frequently, probably just at night
19:02:13 ah ok, got it
19:03:00 DNS and TCP would identify virtually all, if not every, case of internet block or censorship ever implemented in VE
19:03:55 understood. I do see some benefit in implementing this in web_connectivity
19:04:03 it shouldn't be too hard to do either
19:04:06 DNS is by far the most common,
19:04:06 blocked IPs were somewhat common, but currently not used
19:05:40 also it seems like it would be useful to have per-test scheduling information specified in the deck descriptor
19:05:51 You will probably find similar needs in countries with less sophisticated censorship programs and low connection speeds
19:06:05 yes, that makes sense
19:06:55 For our pilot we had pretty specific scheduling for different tests and even different lists
19:07:37 andresazp: it would be cool if you could share some information about this in a ticket on github so we can see about adding it as a future feature.
19:07:39 so a high-priority list would run more frequently, in order to have early data on critical election-related sites being blocked
19:08:40 ok
19:08:46 thanks for sharing your progress on this with us, and please keep us updated on how it moves forward and if there is anything we can do to help you!
19:09:50 Our release plan is:
19:10:01 early version of our server by Dec-Jan
19:10:25 early version of the public site by Feb
19:11:55 First probes on the street and a new ooni-backend instance properly configured by November
19:12:05 or October
19:15:35 November or October of this year?
19:15:40 so the server and public site will happen after the probes are deployed?
19:15:41 The first thing we might need help with is configuring an ooni v2 compatible ooni-backend
19:15:42 For the pilot we barely got it set up
19:16:11 sure, we can help with that. The v2 probes, however, are backward compatible with older backends as well
19:16:29 we might have a problem in our old server, then
19:16:44 ok, we can speak more about this out of band
19:16:49 ok
19:17:02 if there is nothing else to add I would say we can end this with a "slight bit of delay" :P
19:17:05 the public server is a public site that shows info on "incidents"
19:17:14 .. sorry
19:18:18 * graphiclunarkid waves - lurking in the meeting but has nothing to contribute this time.
19:19:01 cool
19:19:14 in that case thank you all for attending and have a good week!
19:19:16 #endmeeting