20:30:48 <serpent> #startmeeting
20:30:48 <MeetBot> Meeting started Wed Mar 25 20:30:48 2020 UTC. The chair is serpent. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:30:48 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
20:31:10 <serpent> Hello everyone!
20:31:27 <serpent> We have something to discuss.
20:31:44 <serpent> 1. Debian images are on AWS marketplace!
20:32:00 <serpent> 2. GCE agents packaging.
20:32:09 <serpent> 3. Image finder.
20:32:17 <serpent> Anything else?
20:32:23 <noahm> IMDSv2 on AWS
20:32:30 <serpent> agreed
20:32:52 <noahm> proposal for discussion: backports kernel option on AWS
20:33:54 <serpent> So what should we start with?
20:33:56 <marcello^> hello
20:34:05 <serpent> 4. Vagrant
20:34:34 <noahm> serpent: let's start with image finder. I think it'll be quick.
20:34:41 <serpent> #topic Image finder
20:35:11 <serpent> Anything new here?
20:35:16 <noahm> Last month I said I'd start work on publication of data to the image finder. I have barely touched that, so nothing notable to say on it.
20:35:22 <arthurbdiniz[m]> so guys, I'm finally back in Brazil to continue the work on Image Finder
20:35:24 <noahm> I will try to prioritize it this month.
20:35:29 <noahm> arthurbdiniz[m]: oh! nice!
20:35:34 <serpent> Thanks
20:36:04 <arthurbdiniz[m]> I have some things to talk about
20:36:19 <serpent> So do you think we should be getting images from Salsa/patterson?
20:36:24 <arthurbdiniz[m]> the approach to send images to the DB is the first one
20:36:41 <noahm> first one?
20:37:14 <arthurbdiniz[m]> I was thinking about 3 ways to put the image data into the database:
20:37:39 <arthurbdiniz[m]> - Script to connect directly to the DB to publish images from Salsa/patterson
20:37:39 <arthurbdiniz[m]> - Web crawler run daily on cd images
20:37:39 <arthurbdiniz[m]> - Use the app API with basic auth
20:38:14 <serpent> From our discussions (DebConf and the Sprint) it looked like Salsa was the primary source
20:38:49 <arthurbdiniz[m]> so the idea would be to use the pipeline to send data, right?
20:38:49 <serpent> So crawling AWS/GCE/Azure would be nice for the future, but the main source of metadata should be the Salsa workflow
20:39:05 <serpent> Exactly
20:39:31 <noahm> it's probably reasonable for the pipeline to push to the image finder... except you'll need to worry about error handling/retries, etc.
20:39:31 <arthurbdiniz[m]> but even using the pipeline we need to have communication between the pipeline and the DB
20:39:37 <arthurbdiniz[m]> how should we do that?
20:40:02 <serpent> You had Merge Request #11 to get a JSON document with the images' details
20:40:14 <noahm> IMO a simple HTTP POST or PUT endpoint with client certificate auth would be reasonable.
20:40:40 <serpent> CI_JOB_URL was the way for now - or am I mistaken here?
20:41:24 <noahm> arthurbdiniz[m]: rather than design this service here, do you think you could write up your proposal in a bit more detail, maybe in email or on salsa or something?
20:41:46 <arthurbdiniz[m]> sure
20:42:05 <noahm> I think that'll be a better way to use our time, and ensure that we get time to consider all the options and their details.
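noahm's suggestion of an HTTP POST/PUT endpoint with client-certificate auth could look roughly like the sketch below. This is only an illustration: the payload field names, the endpoint, and the certificate paths are all assumptions, not the actual Image Finder design (Arthur's proposal doc would define the real schema).

```python
import json

def build_image_record(provider, release, arch, image_id, ci_job_url):
    """Assemble the JSON payload the pipeline would push to the image finder.

    Field names here are illustrative, not the real Image Finder schema.
    """
    return {
        "provider": provider,       # e.g. "ec2", "gce", "azure"
        "release": release,         # e.g. "buster"
        "architecture": arch,       # e.g. "amd64"
        "image_id": image_id,       # provider-specific identifier
        "ci_job_url": ci_job_url,   # provenance: the Salsa job that built it
    }

def push_record(record, endpoint, cert_path, key_path):
    """Push one record over HTTPS with client-certificate auth (sketch)."""
    import requests  # third-party; only needed when actually pushing
    resp = requests.post(
        endpoint,
        data=json.dumps(record),
        headers={"Content-Type": "application/json"},
        cert=(cert_path, key_path),  # client cert identifies the pipeline
        timeout=30,
    )
    resp.raise_for_status()  # retries/error handling would wrap this call
    return resp.status_code
```

The retry/error-handling concern noahm raised would sit around `push_record`, e.g. re-queueing failed records inside the CI job.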
20:42:21 <serpent> #idea Arthur writes a proposal on how to merge the Salsa pipeline with Image Finder
20:43:28 <arthurbdiniz[m]> another thing about the image finder is the deployment
20:44:10 <noahm> what aspect of it?
20:44:18 <serpent> #info I believe zigo was working on it
20:44:35 <arthurbdiniz[m]> I need help to set up the database, and I think it was zigo who was talking about an HA database
20:45:09 <serpent> So for now you were using PostgreSQL, right?
20:45:11 <waldi> no. please make it simple
20:45:14 <noahm> I wouldn't worry about that initially.
20:45:19 <noahm> +1 to what waldi said
20:45:30 <arthurbdiniz[m]> yes
20:45:32 <waldi> this also means: no funky mysql-fork
20:45:36 <arthurbdiniz[m]> using PostgreSQL
20:45:41 <serpent> And zigo wanted to use MySQL because it has better multi-master?
20:45:41 <noahm> the db doesn't change frequently; we can back it up as needed, and restores are simple.
20:45:51 <arthurbdiniz[m]> I deployed the app with docker-compose just to test
20:45:57 <kuLa> HA is simple, but for an MVP a no-HA setup should suffice IMO
20:46:18 <serpent> I have more experience with postgres, and am not sure that we should worry about high availability right now
20:46:27 <noahm> arthurbdiniz[m]: I have no objection to docker-compose.
20:46:48 <arthurbdiniz[m]> I think I can put a cron job inside the VM to back up the data
20:46:57 <arthurbdiniz[m]> just in case something bad happens
20:47:25 <noahm> yeah. please go over these details in the doc you're going to write. :)
20:47:53 <arthurbdiniz[m]> so I will redeploy using docker-compose to keep it simple and test all the features developed until now
20:48:09 <arthurbdiniz[m]> ok noahm
20:48:23 <serpent> I guess for now we should worry more about fitting all the parts together, not about achieving however-many-9s we want
20:48:47 <noahm> I'd be happy with a single 9 for now.
20:49:06 <noahm> Considering we're at zero at the moment.
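Arthur's cron-backup idea could be as small as a script that composes a timestamped `pg_dump` run. A minimal sketch; the backup directory and database name are assumptions, and `pg_dump`'s custom format is compressed and restorable with `pg_restore`:

```python
import datetime

def pg_dump_command(dbname, backup_dir="/var/backups/imagefinder", when=None):
    """Compose a pg_dump invocation with a timestamped output file.

    Paths and the database name are illustrative assumptions; a cron job
    inside the VM would run the resulting command (e.g. via subprocess).
    """
    when = when or datetime.datetime.utcnow()
    outfile = "{}/{}-{}.dump".format(
        backup_dir, dbname, when.strftime("%Y%m%d%H%M%S"))
    # --format=custom produces a compressed archive for pg_restore
    return ["pg_dump", "--format=custom", "--file", outfile, dbname]
```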
20:49:33 * kanashiro waves
20:49:39 <waldi> next, please?
20:49:46 <serpent> #agreed Let's get Image Finder fed with current daily/main images so it has all current data
20:49:56 <zigo> Hi. Reading backlog.
20:50:43 <serpent> #topic Debian on AWS
20:50:51 <serpent> That should be quick.
20:51:25 <serpent> SPI accepted the AWS Marketplace agreement, and thus Debian images are now on the Amazon Marketplace
20:51:42 <noahm> The publication process there is still manual, which is a pain.
20:51:44 <serpent> We are still not on GovCloud, but IMO it's a good step
20:51:55 <noahm> So we'll need to log in to the console and update our listing when we publish new images.
20:53:03 <noahm> In theory we can do some of the work programmatically, if we want to generate an Excel spreadsheet on salsa
20:53:26 <noahm> Which we could then upload to the marketplace, rather than fill out the details manually in a form.
20:53:31 <noahm> But... yuck.
20:53:55 <marcello^> btw how is publishing done? you run a special job on salsa, putting the credentials in via the salsa UI?
20:54:19 <kuLa> noahm: can it be csv?
20:54:21 <serpent> What is blocking us from making this (semi-)automatic?
20:54:23 <noahm> marcello^: salsa has the creds already; you tell it what version to publish, and watch it go.
20:54:32 <zigo> arthurbdiniz[m]: I need the finder to read its config from a file (understand: a DSN) and not something with a hardcoded password, plus the db sync job as a command line.
20:54:55 <noahm> kuLa: no, I don't believe so
20:55:10 <kuLa> bah
20:55:30 <zigo> arthurbdiniz[m]: Since you are using sqla, postgres or mysql, you won't care, right?
20:55:31 <waldi> serpent: what do you mean by "this"?
20:55:32 <noahm> presumably there are tools to generate an Excel doc based on a csv, though.
20:56:03 <serpent> waldi: Publishing images
20:56:05 <waldi> noahm: https://xlsxwriter.readthedocs.io/
20:56:10 <kuLa> it can be done with a bit of python
20:56:12 <waldi> serpent: well, it is?!?
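The csv-to-Excel conversion kuLa and waldi mention really is "a bit of python" with the XlsxWriter library waldi linked. A sketch, assuming the pipeline emits CSV; the column contents are placeholders, not the real Marketplace template:

```python
import csv
import io

def read_rows(csv_text):
    """Parse CSV text (e.g. produced by the salsa pipeline) into rows."""
    return list(csv.reader(io.StringIO(csv_text)))

def write_xlsx(rows, path):
    """Write rows to an .xlsx file via xlsxwriter (third-party, assumed installed).

    The Marketplace expects a specific spreadsheet layout; this only shows
    the mechanical csv -> xlsx step, not that layout.
    """
    import xlsxwriter  # https://xlsxwriter.readthedocs.io/
    workbook = xlsxwriter.Workbook(path)
    worksheet = workbook.add_worksheet()
    for r, row in enumerate(rows):
        for c, value in enumerate(row):
            worksheet.write(r, c, value)
    workbook.close()
```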
20:56:28 <noahm> serpent: publishing happens automatically every night.
20:56:37 <serpent> I remember we were discussing it, that there should be a manual step for official releases
20:56:44 <arthurbdiniz[m]> <zigo "arthurbdiniz: I need that the fi"> we can't use a .env file with docker-compose for that?
20:56:51 <noahm> But the release images require a manual push of a button. Which is as it should be.
20:57:07 <arthurbdiniz[m]> <zigo "arthurbdiniz: Since you are usin"> right
20:57:08 <serpent> OK, then it is as it should be.
20:57:22 <serpent> Sorry for the misunderstanding
20:57:43 <noahm> serpent: the issue is that the AWS Marketplace doesn't have an API for publishing/updating listings.
20:57:57 <serpent> As long as daily images are pushed automatically and releases need manual approval, I'm OK
20:57:58 <noahm> It is a web form, or an upload of an Excel spreadsheet.
20:58:15 <zigo> arthurbdiniz[m]: I would like a packaged solution that I can deploy with puppet, so it would be easy to give to DSA.
20:58:38 <waldi> zigo: DSA does not want to run services. you've been told several times now
20:58:42 <serpent> noahm: So it's more that we need to provide data in a specific format to get it accepted?
20:59:11 <noahm> Yeah. Marketplace is really independent of AMI generation/publication. It is just a catalog, with its own set of metadata.
20:59:12 <waldi> zigo: so packages are useless and wasted time
20:59:19 <arthurbdiniz[m]> <zigo "arthurbdiniz: I would like a pac"> this packaged solution will delay the app deployment, and for now I think what everybody wants is an app running and collecting images
20:59:37 <zigo> waldi: one day, IF it is well made, they may accept.
21:00:15 <zigo> arthurbdiniz[m]: I just need that little bit of help, then I can do the rest!
21:00:44 <serpent> Are we still talking (zigo, arthur, waldi) about the image finder?
21:00:55 <zigo> yes
21:00:55 <serpent> We can return to this, if you want to
21:01:05 <serpent> #topic Image finder
21:01:53 <arthurbdiniz[m]> @zigo the docker-compose deploy is ready and was working at the meeting at MIT
21:02:35 <zigo> arthurbdiniz[m]: The way it's currently made, I wouldn't know how to maintain it. If it is well designed for easy installation and deployment, then I can just press a button and it deploys. That's how I maintain absolutely *all* of my online infrastructure.
21:02:51 <arthurbdiniz[m]> we can leave the docker-compose deploy for now while you do the package stuff
21:03:08 <noahm> zigo: just learn docker, then it will work. no need to rearchitect everything to accommodate your lack of knowledge
21:03:09 <zigo> arthurbdiniz[m]: To have something that works, I need the 2 things I wrote above...
21:03:32 <zigo> 1/ something that reads the login/pass for the DB from a config file
21:03:32 <zigo> 2/ a kind of db_sync away from manage.py
21:03:48 <zigo> Oh also, I'd like it to be served from a *real* web server.
21:03:54 <zigo> A wsgi app...
21:04:11 <zigo> werkzeug / gunicorn / etc. are also no-go for production.
21:04:18 <zigo> It won't scale or handle any load...
21:04:24 <waldi> no webserver runs wsgi apps
21:04:34 <serpent> zigo: are you now talking about feeding data to the Image finder from Salsa?
21:04:45 <zigo> libapache-mod-wsgi is the way to go.
21:04:49 <waldi> no
21:05:01 <waldi> apache is no way to go, much less libapache-mod-wsgi
21:05:02 <serpent> Is it to push data directly to the database, or through the Image Finder API?
21:05:11 <kuLa> no, it's about deployment and maintenance
21:05:24 <arthurbdiniz[m]> 1) the docker-compose injects the env variables into the app, it's not hardcoded
21:05:37 <zigo> serpent: Currently, the web app is using a web server in Python. That doesn't really work in production, because Python is very badly designed for that, mainly because of the Python global interpreter lock.
21:05:50 <waldi> zigo: please stop
21:05:52 <zigo> Upstream OpenStack people understood it the very hard way...
21:06:18 <arthurbdiniz[m]> we are not going to have a lot of requests, we should worry about deploying something first
21:06:33 <arthurbdiniz[m]> and packaging will block me in a lot of ways
21:06:53 <zigo> arthurbdiniz[m]: I told you, you don't need to worry much about packaging, I can do that part of the work.
21:07:12 <arthurbdiniz[m]> with docker I can update the app by changing the image tag
21:07:25 <zigo> Though having your app running as a wsgi app would really be an improvement.
21:08:02 <zigo> What I'm talking about has nothing to do with automatic deployment... You could still do it if you like.
21:08:23 <serpent> zigo, arthur - I still don't understand what the disagreement is here.
21:08:25 <zigo> arthurbdiniz[m]: Would you be available to discuss all of this later on, away from this meeting?
21:08:35 <arthurbdiniz[m]> sure
21:08:42 <serpent> Can you describe what each of you wants to achieve?
21:08:48 <zigo> arthurbdiniz[m]: Good, so let's decide together later on! :)
21:08:58 <arthurbdiniz[m]> <serpent "zigo, arthur - I still don't und"> talking about whether we should deploy with docker or a package
21:09:11 <arthurbdiniz[m]> but we can talk about this later
21:09:13 <arthurbdiniz[m]> let's move on
21:09:17 <zigo> Sure! :)
21:09:42 <serpent> OK
21:09:46 <zigo> Getting the app fed with data is probably the most important bit we need to fix.
21:10:42 <serpent> zigo: do we want to discuss it here, or do you want to discuss deployment strategies with Arthur first?
21:11:22 <zigo> serpent: If arthurbdiniz is available at another time, then we can work together (him and I) on it without wasting everyone's time. So let's move on!
21:11:42 <serpent> Cool with me
21:12:05 <marcello^> agree
21:12:16 <serpent> #topic GCE agents
21:12:34 <serpent> #info I started working with Liam on packaging the Google agents
21:12:47 <zigo> Good! Thanks for this.
21:13:11 <serpent> It's a bit problematic, as Google changed the code to Go and they try to download all dependencies during the build
21:13:32 <zigo> Ouch!
21:13:38 <serpent> See the latest discussion/flame on debian-devel regarding Kubernetes
21:14:00 <serpent> But at least for now it's not vendored
21:14:19 <noahm> Are they using Go modules?
21:14:21 <serpent> It'll probably take some time, but we're (slowly) working it out
21:14:44 <serpent> It looks like it, but I don't know Go so I cannot tell for sure.
21:15:00 <serpent> OTOH learning Go went to the top of my ToDo list :-)
21:15:05 <noahm> If there are go.mod and go.sum files, that's a likely indicator.
21:15:15 <noahm> Go dependency management is horrible, unfortunately.
21:15:25 <noahm> It keeps changing, because they keep doing the wrong thing.
21:15:41 <noahm> Modules are an improvement over what came before, but still have problems.
21:15:46 <serpent> For now - nothing for any of you to do, just trying to keep you informed
21:16:32 <kuLa> any idea why they did it?
21:16:56 <serpent> No
21:17:15 <noahm> kuLa: why they made go dependencies the way they did?
21:17:34 <noahm> Because I think Google engineers have forgotten that not all the world is Google.
21:17:47 <kuLa> no, why switch from py to go. lol, yeah
21:18:10 <noahm> Compiled binaries are super nice, so I don't blame them for switching.
21:18:16 <noahm> And go is a mostly nice language.
21:18:24 <serpent> Should we go to Valgrind?
21:19:07 <marcello^> oh you mean vagrant?
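noahm's heuristic for spotting Go modules (go.mod/go.sum at the tree root) is easy to check mechanically. A small sketch of that check; the `vendor` entry is included because its presence suggests vendored dependencies instead:

```python
import os

def uses_go_modules(srcdir):
    """noahm's indicator: a go.mod file at the source-tree root."""
    return os.path.exists(os.path.join(srcdir, "go.mod"))

def go_dependency_files(srcdir):
    """List which Go dependency-management artifacts are present at the root."""
    candidates = ["go.mod", "go.sum", "vendor"]
    return [name for name in candidates
            if os.path.exists(os.path.join(srcdir, name))]
```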
21:19:34 <serpent> Yes, sorry
21:19:45 * marcello^ is laughing
21:19:56 <marcello^> so there are a couple of things
21:19:58 <serpent> #topic Vagrant
21:20:26 <marcello^> I want to remove the vagrant-lxc image creation because there is no maintainer for that
21:20:44 <marcello^> I thought Kurt would do the job but it looks like he lost interest
21:21:51 <marcello^> second topic, I have updated my pull request for building vagrant libvirt images in FAI
21:22:14 <marcello^> looking forward to reviews
21:22:59 <marcello^> waldi: are you still on it? I preferred this time to leave the discussion threads of the changes open, so that you can close them if you're satisfied with the code
21:23:40 <marcello^> https://salsa.debian.org/cloud-team/debian-cloud-images/-/merge_requests/186
21:24:01 <noahm> I mentioned last month that I'd look. Haven't done so yet, but I'll try to do it this week.
21:24:17 <marcello^> noahm: thanks.
21:24:31 <marcello^> it is not that I am trying to do complicated things
21:25:22 <marcello^> basically I am doing https://www.vagrantup.com/docs/boxes/base.html with FAI on top of the CLOUD classes
21:25:52 <marcello^> third and last vagrant topic, I am thinking about releasing the vagrant boxes from salsa in the future
21:26:20 <marcello^> and my question is how do you store credentials for uploading to a cloud provider on salsa?
21:26:28 <noahm> Is the intent to upload the images to Vagrant's own repositories?
21:26:37 <marcello^> I need an API key to be able to push boxes to vagrant cloud
21:26:41 <serpent> If it's built on Debian hardware - it can be an official Debian image
21:26:46 <marcello^> noahm: yes
21:27:29 <noahm> Salsa's CI system lets you store parameters, which can then be exposed to jobs as environment variables or files.
21:27:43 <noahm> that's how we manage API creds for azure and AWS currently
21:29:09 <marcello^> how do you prevent the API creds being read? is it just a kind of very restricted salsa project?
21:29:28 <marcello^> being read by everyone I mean
21:29:46 <noahm> Yes; the debian-cloud-images project is public, but the debian-cloud-images-daily project is more restricted. That is where the creds are
21:29:56 <noahm> ...for the daily uploads to the cloud providers
21:30:10 <marcello^> ah you do daily uploads
21:30:13 <noahm> There's also a debian-cloud-images-release project that is used for the actual "release" images.
21:30:34 <noahm> Those are not done daily, but are triggered by hand when a release happens
21:30:46 <serpent> marcello^: we have a special restricted project on Salsa to deal with it
21:31:26 <marcello^> OK I get it. I was thinking of doing an automatic release every two weeks, just like Ubuntu.
21:31:54 <noahm> if you (or vagrant users, I guess) want, you could do a nightly sid upload.
21:32:52 <noahm> A manual release for stable is probably sufficient; we will typically trigger those whenever notable core packages (kernel, libc, etc) change.
21:32:52 <marcello^> noahm: yes, people asked for sid vagrant boxes in the BTS IIRC
21:33:03 <noahm> yep, I recall seeing something about that.
21:33:43 <marcello^> ok nothing more to say here from my side
21:34:01 <marcello^> do not forget https://salsa.debian.org/cloud-team/debian-cloud-images/-/merge_requests/186 :)
21:34:19 <marcello^> there is a lot of work from me pending behind that
21:34:28 <noahm> ok
21:34:47 <marcello^> thanks
21:35:51 <serpent> Any other topics?
21:36:04 <noahm> AWS IMDSv2
21:36:19 <serpent> noahm: what about it? Is it related to cloud-init?
21:36:37 <noahm> And anything else that touches the 169.254.169.254 service in EC2
21:37:16 <serpent> #topic AWS and IMDSv2
21:37:28 <noahm> We talked about this a bit last month.
21:37:53 <noahm> There are a number of packages impacted by this change. The default configuration still works, and will continue to do so for the foreseeable future.
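The pattern noahm describes (creds stored as CI variables in the restricted project, exposed to jobs as environment variables) means the upload script only ever reads the key from the environment. A minimal sketch; `VAGRANT_CLOUD_TOKEN` is an assumed variable name, not something the cloud team has actually defined:

```python
import os

def vagrant_cloud_headers(environ=None):
    """Build auth headers for a box upload from a CI variable.

    The token lives only in the restricted Salsa project's CI settings,
    never in the repository; the job sees it as an environment variable.
    """
    environ = environ if environ is not None else os.environ
    token = environ.get("VAGRANT_CLOUD_TOKEN")  # assumed variable name
    if not token:
        raise RuntimeError(
            "VAGRANT_CLOUD_TOKEN not set; is this job running "
            "in the restricted project?")
    return {"Authorization": "Bearer " + token}
```

Forks and public merge-request pipelines would not see the variable, which is exactly the "restricted project" protection marcello^ asked about.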
21:38:20 <noahm> But we are likely to have users who want to enable IMDSv2 for the added security protections it provides.
21:38:50 <noahm> So, for at least some tools, we should try to update stable to add IMDSv2 support.
21:39:07 <noahm> Which again brings us back to the ability to update cloud-related stuff in stable.
21:39:17 <zigo> I'm worried by the removal of cloud-utils from Depends. Do we have a matching updated FAI config to get it back?
21:39:29 <noahm> we already did that a while ago.
21:39:36 <zigo> Great, thanks!
21:40:13 <noahm> I'm more worried about regressions! We've already seen cloud-init 20.1 break with python 3.8 (granted, this doesn't impact stable, but neither does it inspire confidence!)
21:41:01 <noahm> cloud-init 20.1-2 should be better, and ideally we'll be able to get that released with the next buster update.
21:41:29 <zigo> The release team will not accept it unless we do strong enough tests.
21:41:33 <zigo> (on all clouds)
21:41:49 <noahm> The boto related packages are another story. They should also get updated, but we don't maintain those. I haven't had a lot of luck getting much of a response from the usual uploaders
21:42:07 <noahm> cloud-guest-utils needs some upstream work, but it also should get updated.
21:42:47 <noahm> I'm going to stop there for now, but there are also ruby, go, and other SDKs involved. Getting 100% up-to-date support for stable is probably not feasible.
21:43:31 <noahm> So for now I'll try to focus on the packages that we install by default in our images.
21:44:01 <noahm> That's about all I have to say for now. I'll give another update next month, if any progress is made.
21:44:31 <noahm> If you can test cloud-init 20.1-2 anywhere, please do, and let me know how it goes.
21:44:38 <serpent> noahm: thanks
21:44:49 <marcello^> were you able to get release approval for other cloud-init updates in stable in the past?
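For context on what "IMDSv2 support" means for the tools discussed above: AWS's documented IMDSv2 flow is session-oriented — a client first PUTs to `/latest/api/token` with a TTL header to obtain a token, then presents that token on every metadata GET. A sketch of the two requests (building them only, since they can only be answered from inside an EC2 instance):

```python
import urllib.request

IMDS = "http://169.254.169.254"  # the EC2 metadata service endpoint

def token_request(ttl=21600):
    """Build the IMDSv2 session-token request (PUT, per AWS's documented flow)."""
    return urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )

def metadata_request(path, token):
    """Build a metadata GET that presents the session token."""
    return urllib.request.Request(
        IMDS + "/latest/meta-data/" + path,
        headers={"X-aws-ec2-metadata-token": token},
    )

# On an actual EC2 instance one would then do:
#   token = urllib.request.urlopen(token_request()).read().decode()
#   ami = urllib.request.urlopen(metadata_request("ami-id", token)).read().decode()
```

Tools that still issue bare GETs (the IMDSv1 style) keep working while both versions are enabled, which is why the default configuration is unaffected; only instances configured to require IMDSv2 break unpatched tools.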
21:44:58 <marcello^> the release team approval
21:45:10 <noahm> marcello^: no, but we did discuss the idea with them
21:45:28 <noahm> They are OK with it, as long as we're super extra careful not to break things.
21:45:44 <marcello^> good to know
21:45:54 <noahm> IIRC it was Sledge who talked to them originally, but he's not involved anymore.
21:46:02 <serpent> Slightly changing topic...
21:46:07 <serpent> #topic Delegates
21:46:28 <serpent> I wrote to Sam regarding new delegates, but haven't received any response
21:46:50 <serpent> I guess I'll need to send an email to the new DPL after the election
21:47:36 <marcello^> serpent: IIRC you're the only delegate at the moment?
21:47:36 <serpent> I feel a bit uneasy being the only delegate, but with all the activity, I guess the DPL is busy right now
21:47:41 <serpent> Yes
21:47:52 <noahm> yeah, ping Sam one more time
21:48:04 <noahm> I suspect he just dropped that packet
21:48:16 <serpent> OK, I'll send him an email tomorrow or on Friday
21:49:09 <serpent> Any other topics, or should we slowly close this meeting?
21:49:33 * noahm says close it.
21:49:42 * marcello^ is falling asleep
21:50:14 <serpent> Then - good night everyone (or have a good day on other continents)
21:50:23 <serpent> #endmeeting