16:04:00 <cohosh> #startmeeting anti-censorship team
16:04:00 <MeetBot> Meeting started Thu Jan 22 16:04:00 2026 UTC.  The chair is cohosh. Information about MeetBot at https://wiki.debian.org/MeetBot.
16:04:00 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
16:04:39 <cohosh> hi!
16:04:52 <Shelikhoo[mds]> hi~
16:05:02 <cohosh> welcome to the anti-censorship team meeting
16:05:20 <Shelikhoo[mds]> here is our meeting pad: https://pad.riseup.net/p/r.9574e996bb9c0266213d38b91b56c469
16:05:20 <Shelikhoo[mds]> editable link available on request
16:06:48 <cohosh> i can start with an announcement
16:07:23 <cohosh> i wrote up a proposal for a change to proxy poll rates: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40507
16:08:04 <cohosh> the goal is to increase fairness in proxy distribution (though a proxy's capacity will still apply)
16:08:20 <cohosh> please feel free to take a look and comment
16:09:04 <dcf1> so the idea is to implement a rate limit within the broker itself, the rate limit being keyed on proxy IP address
16:09:04 <cohosh> i'm working with some others on simulations to see whether this will advantage censors or clients more in the event of an enumeration attack
16:09:20 <cohosh> yes, a dynamic rate limit
16:10:36 <dcf1> which could look like a map[net.IP]time.Time, a "no-sooner-than" timestamp for each IP address
16:11:31 <cohosh> yeah, that's what i was thinking
16:11:49 <cohosh> and i think we should either evict entries or wipe the map every 24 hours
16:11:54 <dcf1> I suppose entries in the hash table will have to expire eventually. As long as we don't need longer history for an IP address, we could expire an entry as soon as its no-sooner-than timestamp is hit, since that IP is free to query again after that point
16:11:59 <cohosh> to avoid it growing too big
16:12:35 <cohosh> oh yeah, just use the next-poll timestamp as the expiry is a good idea
16:12:44 <Shelikhoo[mds]> yes!
16:13:01 <Shelikhoo[mds]> did we think about how to deal with proxies sharing the same ip address
16:13:24 <Shelikhoo[mds]> should we distribute the quota among them
16:13:44 <Shelikhoo[mds]> or find a way to get them synced in someway
16:14:12 <cohosh> my feeling is to just let the rate limit apply, or have a practical rate limit that is a small multiple of the one we advertise
16:14:50 <dcf1> or it could even be like a priority queue, just check for an IP's presence in the queue to see if it should be allowed yet, and pop entries off the front of the queue by their date. It's especially easy to implement such a queue if the rate limit is the same for all proxies, since all you ever do is push to the back and pop from the front
16:15:08 <Shelikhoo[mds]> yeah, my fear is if there are 3 proxies running at the same time
16:15:30 <Shelikhoo[mds]> since they don't know each other, the next poll time will only work for one of them
16:15:34 <dcf1> the turbotunnel ClientMap does that kind of priority queue expiry thing https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/7afd18e57fbb874a53a9452e9b84ac3f9e1b1e3a/common/turbotunnel/clientmap.go
16:15:49 <cohosh> dcf1: hmm, that's an interesting idea, and then if we *really* need the proxies we can use them even if they shouldn't be polling yet
16:17:04 <Shelikhoo[mds]> as a result with n proxies, they will send (n-1) discarded polls for every n polls they send
16:17:04 <cohosh> ok i think i like that idea
16:17:50 <dcf1> you could have honest proxies add a short random delay to their poll interval (like add up to 20%), that would ensure one proxy on an IP address doesn't consistently starve out other proxies on that IP address. you would want to do that anyway, to smooth out resonance effects from e.g. sudden restarts
16:18:17 <cohosh> that's a good idea
16:18:49 <Shelikhoo[mds]> oh yes, I think this priority queue expiry thing would work... I think it should work with a small timeout
16:19:08 <Shelikhoo[mds]> as the proxy also has webrtc's timeout
16:19:33 <Shelikhoo[mds]> so the request from client cannot wait in queue for extended period of time
16:19:54 <cohosh> if multiple proxies have the same IP, they will be distributed much less, but i don't think that's a problem. i think it's a realistic scenario: multiple proxies on the same IP are good if we experience too much load but don't improve censorship resistance
16:20:02 <Shelikhoo[mds]> so the request from proxy cannot wait in queue for extended period of time
16:20:52 <Shelikhoo[mds]> yes... I agree we can distribute the proxies running at the same ip less...
16:21:18 <cohosh> oh i think i misunderstood what the priority queue is
16:21:43 <dcf1> cohosh see clientMapInner https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/common/turbotunnel/clientmap.go?ref_type=heads#L65
16:22:11 <Shelikhoo[mds]> it is not a bad thing, so long as we make them receive requests from clients in a "semi-fair" way
16:22:23 <dcf1> it has byAge (array) and byAddr (hash table) for quick indexing either by expiration date or by clientid (which in the rate limit case would be an IP address instead)
16:22:30 <cohosh> yeah, i was imagining that the pool of available proxies is a priority queue based on time to or since next allowed poll
16:22:57 <cohosh> but this is just for the map of last-seen timestamps
16:23:07 <cohosh> although i guess we could do both of those things
16:23:37 <dcf1> clientMapInner.removeExpired just pops entries off the front of the queue as long as they are in the past, because the array is a container/heap that keeps them sorted that way. It's how you avoid doing a full traversal of the map all the time.
16:23:53 <cohosh> yeah that is a really nice way to implement this
16:24:29 <dcf1> sorry, just some brainstorming, I don't want to get out too far ahead of requirements gathering
16:24:53 <cohosh> this is good, i've started doing some implementation and i like the ideas here
16:24:56 <cohosh> thanks!
16:24:58 <dcf1> do you think that's all we need? a single "no-sooner-than" timestamp per IP address, a uniform rate for all proxies?
16:25:18 <cohosh> for the rate limiting part, yes
16:25:41 <cohosh> for setting what the rate limit should be, i have some ideas on calculating it based on client poll rates or pool size
16:25:52 <cohosh> but i haven't tried those out yet
16:25:56 <Shelikhoo[mds]> I think this should work for now...
16:26:10 <dcf1> ooh I suspect there must be some theorem from queuing theory that would inform such a policy
16:26:24 <cohosh> nice haha
16:26:57 * cohosh rubs hands together at the thought of reading and applying some cool theory
16:27:08 <dcf1> cause that sort of thing is like "if you have people calling the support telephone line Poisson distributed at rate lambda, how many agents do you need in your call center"
16:27:27 <cohosh> totally
16:28:20 <dcf1> I'll update the ticket with my immediate impressions and ideas
16:28:26 <cohosh> thanks!
16:28:37 <Shelikhoo[mds]> "0, they will be handled with AI that couldn't understand or do anything"
16:28:52 <cohosh> i think that's it for now, this will take a while to implement and assess
16:29:03 <cohosh> i'll bring it up again later when we have some simulation results :)
16:29:07 <cohosh> Shelikhoo[mds]: lol
16:31:42 <cohosh> ok next thing on the agenda
16:31:49 <cohosh> i think it's from you Shelikhoo[mds]
16:32:04 <Shelikhoo[mds]> this topic is from me
16:32:04 <Shelikhoo[mds]> basically I have received a direct report from a user that their network environment will block/restrict tls 1.2 traffic,
16:32:04 <Shelikhoo[mds]> and when using a randomized fingerprint with utls, its connection will sometimes get blocked because of the usage of tls1.2
16:32:04 <Shelikhoo[mds]> I was thinking about whether we want to fix this issue
16:32:15 <Shelikhoo[mds]> Maybe support pinning tls1.3 when using uTLS random fingerprint?
16:32:15 <Shelikhoo[mds]> https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/171
16:33:03 <cohosh> oh that's interesting
16:33:11 <Shelikhoo[mds]> utls's random fingerprint setting has a 40% probability of setting the max tls version to 1.2
16:33:41 <Shelikhoo[mds]> we could try to find a way to get around this issue
16:33:48 <theodorsm> I don't know if the tls Config MinVersion is respected in utls randomization
16:34:06 <theodorsm> https://github.com/refraction-networking/utls/blob/8fe0b08e9a0e7e2d08b268f451f2c79962e6acd0/common.go#L811
16:34:07 <Shelikhoo[mds]> Shelikhoo[mds]: sorry it was 60%
16:35:54 <Shelikhoo[mds]> theodorsm: The function call signature is
16:35:56 <Shelikhoo[mds]> func generateRandomizedSpec(... (full message at <https://matrix.debian.social/ircbridge/media/v1/media/download/AY_JGlFNJOAz__ouC2zfQo0QH1Z98dQwC46V8D_3-SODcP4FZdzSIh9qwS1MGuYTic7Rmg7FdyKF_C0h5Uy6zJFCecCpyISgAG1hdHJpeC5kZWJpYW4uc29jaWFsL0RQWHFpd3lpb0piSmlFWUZKTENJSE1ldA>)
16:36:29 <Shelikhoo[mds]> so I think it is very likely max or min version setting is not respected
16:38:17 <cohosh> it sounds like the easiest solution is to suggest a different fingerprint to use in this case
16:38:54 <cohosh> but i haven't been paying close attention to which fingerprints work well
16:39:11 <cohosh> have we been using random because it's difficult to keep the other ones up to date?
16:39:42 <Shelikhoo[mds]> yeah, I think other fingerprints kind of get outdated really quickly
16:40:54 <Shelikhoo[mds]> and many of these contributed fingerprints are problematic when first introduced
16:41:13 <Shelikhoo[mds]> like requiring fixes from time to time
16:42:05 <cohosh> ok so at least having an option to pin tls1.3 with a random fingerprint would be useful
16:42:44 <Shelikhoo[mds]> yeah, if we want to do that I can have a look into a way to implement such a thing
16:43:28 <Shelikhoo[mds]> the primary reason I think we want to discuss this is that the bridgeline space situation is getting worse and worse
16:43:52 <cohosh> that is true
16:44:07 <Shelikhoo[mds]> I wonder if I should implement this as a regular bridge line option
16:44:19 <Shelikhoo[mds]> or we need something else
16:44:28 <theodorsm> Seems like we could create a custom ClientHelloSpec @Shelikhoo: https://github.com/refraction-networking/utls/blob/8fe0b08e9a0e7e2d08b268f451f2c79962e6acd0/u_parrots.go#L2767
16:46:17 <Shelikhoo[mds]> theodorsm: Yes! If we want to get chrome’s fingerprint supported in this way, we might need to actually implement some of its tls extensions
16:46:46 <Shelikhoo[mds]> which requires a lot of engineering effort
16:47:04 <theodorsm> I can assist if needed, but do we want to support it?
16:47:51 <theodorsm> We could perhaps upstream a change to utls to control min version too
16:48:41 <theodorsm> Also, would be interesting to hear more about the report. Where did TLS 1.2 get blocked?
16:49:18 <Shelikhoo[mds]> If I recall correctly it was somewhere in Russia
16:50:07 <Shelikhoo[mds]> I think the question being discussed is what is the best approach moving forward about this issue
16:50:34 <cohosh> i don't have a strong opinion about this, but i would try to find a different working fingerprint in the meantime and then see how much work it would be to implement a tls1.3 only random fingerprint
16:51:44 <cohosh> we can make space in the bridge line and we can ship some backup stun servers in the ClientPluginTransport line too if needed
16:52:00 <Shelikhoo[mds]> Yes.. I can get an estimate of the work required first
16:52:41 <Shelikhoo[mds]> And don’t think too much about the bridge line space issue for now
16:53:07 <cohosh> *ClientTransportPlugin line, heh
16:54:23 <Shelikhoo[mds]> I think we can discuss this again when I have a better idea about the work required to implement the “always tls1.3” work
16:54:28 <cohosh> ok sounds good
16:54:34 <Shelikhoo[mds]> Nice!
16:54:36 <cohosh> thanks Shelikhoo[mds]
16:55:08 <cohosh> there's one more quick thing from me which is that i want to make breaking changes to some of the APIs in our snowflake packages, which would require a major version bump
16:55:24 <cohosh> i know we've discussed this before, so i don't necessarily want a long discussion
16:55:48 <cohosh> but we've acquired some technical debt in, for example, the messages package
16:56:16 <cohosh> https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/7afd18e57fbb874a53a9452e9b84ac3f9e1b1e3a/common/messages/proxy.go
16:56:35 <cohosh> and i just want to rewrite this
16:57:23 <Shelikhoo[mds]> I think we can open an issue to discuss this asynchronously.
16:57:32 <cohosh> sure
16:57:39 <Shelikhoo[mds]> We might want to bundle all the breaking changes in one go
16:57:44 <dcf1> Oh you mean the "doSomething" and "doSomethingWithThisOtherThing" pattern? Normalizing that to something more regular?
16:57:57 <cohosh> dcf1: yes
16:58:09 <cohosh> and to make it more like the client.go file in that directory
16:58:22 <cohosh> so we're not individually returning all the fields in the Decode functions
16:58:42 <cohosh> we can return the struct instead
16:58:55 <dcf1> oh, offhand I don't see a reason to break compatibility with those APIs, even if exposing a new more regular API
16:58:57 <cohosh> which is why we needed that pattern in the first place
16:59:23 <dcf1> but I think it's not a big deal, if you think it doesn't cause too much pain for our downstreams it's ok
16:59:23 <cohosh> dcf1: ah so you're saying just add new functions and keep the old ones?
16:59:33 <cohosh> ok
16:59:35 <dcf1> yeah
16:59:47 <dcf1> write the old ones in terms of the new ones
17:00:01 <cohosh> yeah that sounds fine too
17:00:10 <Shelikhoo[mds]> I think most of our downstreams are not using these functions directly
17:00:17 <dcf1> but I haven't looked at this specifically to know if that's reasonable, use your own judgement I'd say
17:00:24 <Shelikhoo[mds]> In theory we could search all public repo for references
17:00:28 <cohosh> i'll comment some of these other functions as deprecated maybe, and then if/when we do a major version bump we can remove them if we want to clean up the code
17:00:47 <cohosh> and it'll be easy to find the ones we're not using anymore
17:00:53 <Shelikhoo[mds]> cohosh: Nice!
17:01:14 <cohosh> that's it from me, anything else for today?
17:01:31 <Shelikhoo[mds]> We also have these kind of do…with…and in other parts of codebase
17:01:52 <Shelikhoo[mds]> It will be great if we can get rid of them as well, but maybe later
17:02:00 <Shelikhoo[mds]> Eof from me
17:02:02 <Shelikhoo[mds]> Thanks
17:03:10 <cohosh> awesome, thanks everyone!
17:03:12 <cohosh> #endmeeting