14:30:19 <fao89> #startmeeting Pulp Triage 2020-05-12
14:30:19 <fao89> #info fao89 has joined triage
14:30:19 <fao89> !start
14:30:19 <pulpbot> Meeting started Tue May 12 14:30:19 2020 UTC.  The chair is fao89. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:30:19 <pulpbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:30:19 <pulpbot> The meeting name has been set to 'pulp_triage_2020-05-12'
14:30:19 <pulpbot> fao89: fao89 has joined triage
14:30:38 <fao89> !next
14:30:39 <fao89> #topic https://pulp.plan.io/issues/6702
14:30:39 <pulpbot> fao89: 5 issues left to triage: 6702, 6697, 6696, 6695, 6694
14:30:40 <pulpbot> RM 6702 - alikins - NEW - openapigenerator get_operation multipart workaround causes "cannot instantiate nested serializer as Items" errors
14:30:41 <pulpbot> https://pulp.plan.io/issues/6702
14:30:56 <fao89> #idea Proposed for #6702: accept and add to sprint
14:30:56 <fao89> !propose other accept and add to sprint
14:30:57 <pulpbot> fao89: Proposed for #6702: accept and add to sprint
14:31:27 <ggainey> #info ggainey has joined triage
14:31:27 <ggainey> !here
14:31:27 <pulpbot> ggainey: ggainey has joined triage
14:31:30 <daviddavis> #info daviddavis has joined triage
14:31:30 <daviddavis> !here
14:31:30 <pulpbot> daviddavis: daviddavis has joined triage
14:31:32 <x9c4> #info x9c4 has joined triage
14:31:32 <x9c4> !here
14:31:33 <pulpbot> x9c4: x9c4 has joined triage
14:31:55 <dkliban> #info dkliban has joined triage
14:31:55 <dkliban> !here
14:31:55 <pulpbot> dkliban: dkliban has joined triage
14:32:10 <dkliban> #idea Proposed for #6702: accept and add to sprint
14:32:10 <dkliban> !propose other accept and add to sprint
14:32:10 <pulpbot> dkliban: Proposed for #6702: accept and add to sprint
14:32:18 <daviddavis> +1
14:32:26 <fao89> #agreed accept and add to sprint
14:32:26 <fao89> !accept
14:32:26 <pulpbot> fao89: Current proposal accepted: accept and add to sprint
14:32:27 <fao89> #topic https://pulp.plan.io/issues/6697
14:32:27 <pulpbot> fao89: 4 issues left to triage: 6697, 6696, 6695, 6694
14:32:28 <pulpbot> RM 6697 - dkliban@redhat.com - NEW - pulp_installer doesn't add a newline char in requirements.in
14:32:29 <pulpbot> https://pulp.plan.io/issues/6697
14:32:51 <dkliban> this seems to happen when you also have the client packages installed
14:32:54 <dkliban> #idea Proposed for #6697: accept and add to sprint
14:32:54 <dkliban> !propose other accept and add to sprint
14:32:55 <pulpbot> dkliban: Proposed for #6697: accept and add to sprint
14:33:00 <fao89> !accept
14:33:00 <fao89> #agreed accept and add to sprint
14:33:01 <fao89> #topic https://pulp.plan.io/issues/6696
14:33:01 <pulpbot> fao89: Current proposal accepted: accept and add to sprint
14:33:03 <pulpbot> fao89: 3 issues left to triage: 6696, 6695, 6694
14:33:04 <pulpbot> RM 6696 - ironfroggy - NEW - pulp_installer fails to run "Collect static content" task when pulp_source_dir is set
14:33:05 <pulpbot> https://pulp.plan.io/issues/6696
14:33:20 <fao89> it happens when source dir is git url
14:33:33 <dkliban> ah
14:33:41 <dkliban> #idea Proposed for #6696: accept and add to sprint
14:33:41 <dkliban> !propose other accept and add to sprint
14:33:41 <pulpbot> dkliban: Proposed for #6696: accept and add to sprint
14:33:47 <fao89> #agreed accept and add to sprint
14:33:47 <fao89> !accept
14:33:47 <pulpbot> fao89: Current proposal accepted: accept and add to sprint
14:33:48 <fao89> #topic https://pulp.plan.io/issues/6695
14:33:48 <pulpbot> fao89: 2 issues left to triage: 6695, 6694
14:33:49 <pulpbot> RM 6695 - dkliban@redhat.com - NEW - WARNING does not provide enough details to take action
14:33:50 <pulpbot> https://pulp.plan.io/issues/6695
14:34:00 <ggainey> yes please
14:34:03 <dkliban> #idea Proposed for #6695: accept and add to sprint
14:34:03 <dkliban> !propose other accept and add to sprint
14:34:03 <pulpbot> dkliban: Proposed for #6695: accept and add to sprint
14:34:07 <bmbouter> #info bmbouter has joined triage
14:34:07 <bmbouter> !here
14:34:07 <pulpbot> bmbouter: bmbouter has joined triage
14:34:09 <daviddavis> +1
14:34:24 <fao89> hahaha I'm suspecting dkliban made a bot
14:34:29 <dkliban> LOL
14:34:50 <fao89> #agreed accept and add to sprint
14:34:50 <fao89> !accept
14:34:50 <pulpbot> fao89: Current proposal accepted: accept and add to sprint
14:34:51 <fao89> #topic https://pulp.plan.io/issues/6694
14:34:51 <pulpbot> fao89: 1 issues left to triage: 6694
14:34:52 <pulpbot> RM 6694 - bmbouter - NEW - RHSM CertGuard needs to only verify the path of the Distribution.base_path
14:34:53 <pulpbot> https://pulp.plan.io/issues/6694
14:35:03 <dkliban> #idea Proposed for #6694: accept and add to sprint
14:35:03 <dkliban> !propose other accept and add to sprint
14:35:03 <pulpbot> dkliban: Proposed for #6694: accept and add to sprint
14:35:04 <daviddavis> move to certguard project
14:35:06 <bmbouter> I'm fixing this week, please accept add to sprint
14:35:09 <bmbouter> oh yeah move to certguard
14:35:18 <fao89> #agreed accept and add to sprint
14:35:18 <fao89> !accept
14:35:18 <pulpbot> fao89: Current proposal accepted: accept and add to sprint
14:35:19 <pulpbot> fao89: No issues to triage.
14:35:36 <fao89> Open floor!
14:35:39 <daviddavis> I wanted to ask about https://github.com/pulp/pulp_file/pull/391
14:36:09 <dkliban> it's part of this epic https://pulp.plan.io/issues/6707
14:36:15 <daviddavis> I was wondering if we could maybe email pulp-dev since I don't know if everyone will see this PR
14:36:25 <dkliban> +1
14:36:26 <bmbouter> +1
14:36:34 <ggainey> +1
14:37:07 <daviddavis> fao89: did you want to do that?
14:37:15 <daviddavis> if not, I can
14:37:29 <daviddavis> I'm assuming that the plugin_template will also use poetry?
14:37:37 <fao89> it is not an official PR, the real PR is: https://github.com/pulp/pulp_file/pull/390
14:38:19 <fao89> this is a PoC of using poetry, I intend to start a thread when I have more details
14:38:28 <daviddavis> great, that sounds good.
14:38:37 <dkliban> agreed ... thank you for working on this fao89
14:38:45 <bmbouter> yes thank you fao89
14:38:50 * ggainey cheers wildly
14:38:53 <ggainey> :)
14:38:58 <bmbouter> and for the release automation you're doing as part of this also
14:39:00 <bmbouter> we sorely need that
14:39:02 <fao89> the original problem is: I'm having a problem with requirements inside setup.py
14:39:12 <daviddavis> yea I saw that
14:39:20 <daviddavis> and I think we've outgrown that approach
14:39:24 <bmbouter> I agree
14:39:37 <fao89> so I started this epic: https://pulp.plan.io/issues/6707
14:39:56 <fao89> and I addressed it in one commit here: https://github.com/pulp/pulp_file/pull/390
14:40:28 <bmbouter> can we check in on the write_only saga next?
14:40:37 <dkliban> sure
14:40:59 <dkliban> though i have a topic i also want to bring up after write_only https://pulp.plan.io/issues/6699
14:41:31 <bmbouter> so username and password got resolved, ty fao89 https://github.com/pulp/pulpcore/pull/695
14:41:42 <dkliban> awesome
14:41:55 * daviddavis cheers wildly
14:41:56 <x9c4> fao89++
14:41:56 <pulpbot> x9c4: fao89's karma is now 54
14:42:03 <bmbouter> I believe this is the next item that needs to be handled https://pulp.plan.io/issues/6691 and it's on sprint
14:42:22 <dkliban> yep ... that makes sense
14:42:26 <ggainey> aye
14:42:52 <bmbouter> is anyone able to work on that?
14:43:15 <x9c4> plan is to change SecretCharField to a normal CharField?
14:43:26 <dkliban> it should be a quick PR to make ... but i need to finish other PRs before i can take on any more work
14:43:30 <daviddavis> are we also removing SecretCharField ?
14:43:31 <dkliban> x9c4: yes
14:43:35 <dkliban> daviddavis: yes
14:43:38 <x9c4> I'll take it.
14:43:43 <daviddavis> x9c4: ty
14:43:44 <dkliban> x9c4++
14:43:44 <pulpbot> dkliban: x9c4's karma is now 43
14:43:45 <bmbouter> ty!
14:43:46 <ttereshc> x9c4++
14:43:46 <pulpbot> ttereshc: x9c4's karma is now 44
14:43:54 <bmbouter> ok next topic?
14:44:00 <bmbouter> x9c4: I can review for that if that is helpful
14:44:29 <x9c4> sure, ill ping you
14:44:39 <dkliban> bmbouter: anything else for write_only at this time?
14:44:55 <bmbouter> no, let's get through that and we can discuss the step after on fri
14:45:02 <dkliban> +1
14:45:22 <dkliban> i would like to discuss a story that was filed by a user https://pulp.plan.io/issues/6699
14:45:48 <ggainey> being able to set that timeout would really help when debugging viewsets
14:46:14 <dkliban> ggainey: i think these are different timeouts
14:46:21 <ggainey> dkliban: sad now :)
14:46:29 <daviddavis> this is timeouts for syncing content
14:46:30 <bmbouter> yup different timeouts
14:46:45 <x9c4> so this is relevant to remotes.
14:46:45 <ggainey> ah well - ignore me, carry on :)
14:46:45 <dkliban> ggainey: when debugging a viewset, you should not use gunicorn but use pulpcore-manager run_server ... it doesn't timeout the request
14:47:06 <ggainey> dkliban: ...and now I have Larned Me A Thing - thanks!
14:47:12 <bmbouter> or you can increase gunicorn's timeout
14:47:28 <x9c4> Is he asking for site-wide or remote specific config?
14:47:46 <dkliban> x9c4: i am not sure ... we could ask that on the story
14:47:48 <bmbouter> they are using pulp_rpm but I see value in doing it on BaseRemote
14:48:12 <ttereshc> I remember bmbouter's suggestion for each plugin to decide whether it's needed or not, any objections to having it in the core?
14:48:16 <bmbouter> because these settings are meaningful in all syncs because they configure aiohttp which is used in all downloaders
14:48:32 <ttereshc> that answers my question :)
14:48:39 <x9c4> If we are there, we could add retry counts in the same go.
14:48:43 <bmbouter> I'm +1 to adding it to core
14:48:48 <ggainey> concur
14:48:49 <daviddavis> I have no objections to adding it to core but I'm curious about whether he's asking for remote vs system wide
14:48:54 <bmbouter> oh retry counts...
14:49:32 <x9c4> I think, they are hardcoded atm
14:49:44 <dkliban> x9c4: are you talking about retry for 429?
14:50:03 <dkliban> 429 response code from the server
14:50:09 <dkliban> too many requests
14:50:10 <x9c4> I think so.
14:50:11 <bmbouter> these are the situations pulp retries on https://github.com/pulp/pulpcore/blob/master/pulpcore/download/http.py#L15-L32
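The behavior discussed here — retrying only on HTTP statuses the server might recover from (like 429), and giving up immediately on ones it won't (like 404) — can be sketched in plain Python. This is an illustrative sketch only: pulpcore's real logic lives in pulpcore/download/http.py and uses the backoff library, and the exact set of retryable statuses here is an assumption, not pulpcore's list.

```python
import time

# Illustrative set of statuses worth retrying; the server may recover.
# (Assumption for this sketch, not pulpcore's actual list.)
RETRYABLE_STATUSES = {429, 502, 503, 504}


class HTTPError(Exception):
    """Minimal stand-in for an HTTP error carrying a status code."""

    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status


def fetch_with_retries(fetch, max_tries=10, base_delay=0.1):
    """Call ``fetch`` with exponential backoff, retrying only statuses in
    RETRYABLE_STATUSES; anything else (e.g. 404) fails immediately."""
    for attempt in range(max_tries):
        try:
            return fetch()
        except HTTPError as exc:
            if exc.status not in RETRYABLE_STATUSES:
                raise  # "if it's not there it's not there"
            if attempt == max_tries - 1:
                raise  # retry budget exhausted
            time.sleep(base_delay * 2 ** attempt)
```

As noted later in the discussion, this only covers HTTP-level errors; a closed TCP connection or a socket timeout happens below HTTP in the stack and never reaches this check.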
14:50:18 <x9c4> no use to retry 404
14:50:45 <dkliban> we don't retry on 404
14:50:59 <dkliban> if it's not there it's not there
14:51:20 <daviddavis> retries were discussed recently in https://pulp.plan.io/issues/6589
14:51:41 <bmbouter> we do have an open ask that the sync continue though and that wouldn't be a retry but a "give me everything I can get" option at sync time
14:51:52 <dkliban> daviddavis: it's the same user that filed the story we are discussing and he agreed that retries were not needed
14:52:05 <daviddavis> dkliban: yup
14:52:11 <dkliban> and that being able to increase timeouts is the solution
14:52:38 <bmbouter> that issue is https://pulp.plan.io/issues/5286
14:52:39 <dkliban> he states so in the last comment after i closed the issue
14:52:59 <daviddavis> yes, this was to the point about adding in retries
14:54:00 <bmbouter> so am I reading correctly that no retries are planned but adding timeouts per https://pulp.plan.io/issues/6699 to BaseRemote would be?
14:54:10 <x9c4> https://github.com/pulp/pulpcore/blob/b94abd64d76ea4554e6750ff38ce458eaa888cc8/pulpcore/download/http.py#L181
14:54:11 <dkliban> that's correct
14:54:23 <x9c4> We do retry 10 times.
14:54:38 <x9c4> So my suggestion was to make that number configurable.
14:55:02 <bmbouter> x9c4: agreed on specific error codes, but you could set that number really high and users would still have the same disconnection errors they have today
14:55:03 <dkliban> x9c4: i am on board with that. we just need to make it very clear what conditions that applies to.
14:55:27 <dkliban> that it's only for cases where the server is actually responding with specific error codes
14:55:36 <dkliban> and not for network problems
14:55:38 <bmbouter> in practice the most common case is a TCP connection closed
14:55:48 <bmbouter> and if the server hangs up pulp stops
14:55:56 <dkliban> yep
14:56:04 <x9c4> I see.
14:56:13 <bmbouter> it's confusing so I'm glad we are talking about it
14:56:40 <bmbouter> that line uses the function I linked to which only retries on specific http error codes
14:57:28 <bmbouter> but with mdep's example that's a TCP socket timeout so that's below HTTP in the stack
14:57:48 <bmbouter> but that still won't help yet other users where the server actually closes the TCP connection
14:57:58 <x9c4> so we could try to make http_giveup be aware of network timeouts?
14:58:25 <x9c4> additionally to make the timeouts tunable...
14:58:31 <dkliban> x9c4: i don't think that's where it would go
14:58:51 <dkliban> but i don't want to discuss implementation now
14:58:59 <bmbouter> there are two issues with broad retries (like on network errors)
14:59:08 <bmbouter> 1) if the server says stop by hanging up pulp should stop
14:59:50 <bmbouter> 2) if pulp did continue, an actual network error would become a very slow failure, tons of retries for each content unit slowly failing one by one over hours
15:00:56 <bmbouter> the important thing is that pulp saves all the content it did receive, so the next time it syncs it doesn't need to redownload what it already got; when users do retry they effectively resume whenever the network and server are ready for that
15:02:18 <dkliban> that all makes sense to me
15:02:35 <dkliban> i would like to bring the conversation back to the original topic of timeouts
15:03:18 <dkliban> bmbouter: could you comment on the story to say that we should add the timeout settings to the base remote
15:03:19 <dkliban> ?
15:03:21 <bmbouter> yeah so that case is that the server hasn't closed the tcp connection but the server is responding very slowly with data
15:03:24 <bmbouter> yes I can
15:03:40 <bmbouter> so I think having the client wait longer is a reasonable thing for pulp to be able to do
15:03:57 <bmbouter> these timeouts would allow that to be configured
15:05:02 <dkliban> +1
15:05:04 <x9c4> agreed.
15:05:29 <x9c4> And we fall back to whatever is provided by aiohttp if they are not specified?
15:05:33 <bmbouter> I'm going to look at the aiohttp docs here in a bit to give more specific recommendations on which timeouts and names and such
15:05:50 <bmbouter> yes aiohttp has defaults and if unspecified those are used (and those are used today)
15:06:01 <x9c4> +1
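The fallback behavior x9c4 and bmbouter agree on — per-remote timeouts that, when unset, leave aiohttp's own defaults in effect — could look roughly like this. The remote field names (total_timeout, sock_read_timeout, etc.) are hypothetical; the aiohttp.ClientTimeout keyword names (total, connect, sock_connect, sock_read) are real, and the exact choice was left for bmbouter's follow-up on the story.

```python
def client_timeout_kwargs(remote_settings):
    """Collect only the timeouts the remote actually set, keyed by the
    keyword names aiohttp.ClientTimeout accepts. Returning None lets the
    caller omit the timeout argument entirely, so aiohttp's built-in
    default stays in effect (the behavior used today)."""
    mapping = {
        "total": remote_settings.get("total_timeout"),
        "connect": remote_settings.get("connect_timeout"),
        "sock_connect": remote_settings.get("sock_connect_timeout"),
        "sock_read": remote_settings.get("sock_read_timeout"),
    }
    # Drop unset (None) values so unspecified timeouts are not overridden.
    kwargs = {k: v for k, v in mapping.items() if v is not None}
    return kwargs or None
```

A caller would then do something like `aiohttp.ClientTimeout(**kwargs)` only when kwargs is not None, matching "if unspecified those are used" above.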
15:08:13 <bmbouter> ggainey, daviddavis, ttereshc: wdyt?
15:08:13 <ttereshc> I have one topic to bring up if everyone is done with issues/stories/prs
15:08:21 <ttereshc> bmbouter, I'm +1
15:08:28 <ttereshc> it makes perfect sense to me
15:08:51 <ggainey> bmbouter: I am great with this whole discussion, y'all seem to have it under control, as they say
15:09:09 <daviddavis> +1 from me
15:09:32 <bmbouter> ok cool I will revise and bring back for review at next open floor
15:09:36 <bmbouter> +1 next topic
15:09:40 <x9c4> So the other thing about the slowly dying sync is a different story...
15:09:43 <ttereshc> bmbouter, would you like to put what you explained N times into docs? :) I bet you should find it written in different irc logs
15:12:03 <fao89> how about installer triage: change the triage query or start a new project?
15:12:26 <bmbouter> ttereshc: we have an issue describing that I think right? part of the thing is that I don't think there is agreement on the position that pulp should not retry
15:12:38 <bmbouter> actually I know there is not total agreement on that
15:12:59 <bmbouter> I have explained it many times so I would like to document it, but that disagreement prevents me I think
15:13:23 <daviddavis> I didn't know there was disagreement
15:13:34 <ttereshc> bmbouter, we have this one https://pulp.plan.io/issues/6624
15:13:54 <ttereshc> I also thought that there is no disagreement
15:14:32 <bmbouter> ggainey: I heard concerns from you before and I haven't heard anything from x9c4 yet
15:14:42 <dkliban> i didn't think there was disagreement either
15:15:09 <ttereshc> maybe we can have a thread on pulp-dev about that
15:15:35 <ttereshc> bmbouter, your arguments for not retrying are sound, imo
15:15:48 <bmbouter> I'm ok with starting that thread, I want to inclusively help improve this
15:15:58 <x9c4> I don't disagree. And if we do any retrying, we should limit the total number per sync.
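x9c4's idea of capping the total number of retries per sync (so a real network outage fails fast instead of every content unit slowly retrying on its own, per bmbouter's point 2 above) could be sketched as a shared budget. This is purely illustrative and not pulpcore code; pulp's downloaders are asyncio-based, while this standalone sketch uses a threading.Lock for safety.

```python
import threading


class RetryBudget:
    """A sync-wide cap on retries, shared by all downloads in one sync.
    Once exhausted, no download retries again, so a broad network failure
    surfaces quickly instead of dragging on for hours."""

    def __init__(self, total_retries):
        self._remaining = total_retries
        self._lock = threading.Lock()

    def spend(self):
        """Consume one retry from the budget; False means give up now."""
        with self._lock:
            if self._remaining <= 0:
                return False
            self._remaining -= 1
            return True
```

Each downloader would call `budget.spend()` before sleeping and retrying, rather than keeping its own independent attempt counter.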
15:16:01 <bmbouter> I can take that as a today action item and folks can respond without me putting them on the spot here
15:16:10 <ggainey> yeah that sounds good
15:16:16 <daviddavis> great
15:16:20 <ttereshc> thank you
15:16:20 * bmbouter takes action item
15:16:37 <bmbouter> thank you! I'll also link to 6624 on this thread
15:16:45 <x9c4> bmbouter++
15:16:45 <pulpbot> x9c4: bmbouter's karma is now 261
15:17:26 <bmbouter> ttereshc: did you have a topic besides this one? I also heard one from fao89
15:17:45 <ttereshc> yeah, let's cover the fao89 one first
15:18:09 <bmbouter> I put this as a discussion topic on tomorrow's installer meeting
15:18:30 <daviddavis> I think that makes sense. it's a question for the installer team as to whether they want a new project.
15:18:37 <ttereshc> ok
15:18:52 <daviddavis> also, dkliban is going to start a discussion about triage in general I think
15:19:03 <daviddavis> there was an AI from yesterday's meeting
15:19:10 <ttereshc> I want to ask what do you all think about upgrade tests for pulp 3
15:19:10 <bmbouter> what I heard from this group is we want it to be easy for users to file installer bugs and to minimize the time the installer team spends triaging installer bugs
15:19:25 <dkliban> daviddavis: yes, i am about to send that email now
15:19:29 <daviddavis> dkliban++
15:19:29 <pulpbot> daviddavis: dkliban's karma is now 463
15:19:32 <daviddavis> bmbouter: agreed
15:20:07 <ttereshc> In rpm we started to run into upgrade issues which users are reporting.
15:20:07 <fao89> having the discussion on the installer meeting sounds good to me
15:21:25 <ggainey> +1
15:24:39 <x9c4> ttereshc, this is upgrading with pulp_installer? I think there is a molecule scenario for upgrades. Probably not with all plugins.
15:26:46 <ttereshc> I guess with installer it makes the most sense but it doesn't matter. Just to test on different content sets that the upgrades are correct
15:27:09 <ttereshc> and if it can be run as a part of daily cron, even better, imo
15:27:14 <bmbouter> ttereshc: the installer team I think will need to implement it, but we could use help planning it
15:27:35 <bmbouter> even a short, simple doc outlining what the desires are of a plugin like rpm, how loaded should it be before upgrading for example
15:27:53 <daviddavis> time check: 3 min
15:28:16 <fao89> we do have upgrade scenarios for release and source installs and it runs on cron job: https://github.com/pulp/pulp_rpm_prerequisites/runs/665547209?check_suite_focus=true
15:29:07 <ttereshc> bmbouter, I can write up a redmine story as a starting point and a place to discuss
15:29:35 <ttereshc> fao89, can you point me to the code where the scenarios are described?
15:29:56 <ttereshc> any other questions for open floor?
15:30:40 <fao89> it is hard to point to, because it is a mix of pulp_installer and pulp_rpm_prerequisites
15:30:57 <fao89> I'll gather some links and share with you
15:31:27 <bmbouter> fao89: if we can put it all into one doc that would be ideal
15:31:32 <bmbouter> and then ttereshc could add to it
15:31:34 <bmbouter> maybe a hack.md?
15:31:39 <bmbouter> I have to go to another meeting
15:31:41 <bmbouter> ttyl
15:31:48 <fao89> I was thinking about doing a google doc
15:32:05 <ttereshc> fao89, is there any internal info there?
15:32:37 * ttereshc is trying to understand why google doc
15:34:11 <fao89> because it will be a draft
15:34:30 <fao89> #endmeeting
15:34:30 <fao89> !end