Conversation
I am delighted to see this PR, and to see progress being made on the remaining big-endian bugs. I presume these changes were generated with AI, although the PR does not specify. My concern is that neither of us is familiar with Dennis' code, and these changes are substantial and beyond my power to review adequately. Dennis was a wonderful programmer, but not great about documenting code or providing full test coverage. My approach to Dennis' code would be to:
I would suggest that these three things be done before further major edits to the code. Just having the code documented and (to a lesser extent) formatted like the rest of the code would make code review far more effective for those (all of us) who are not familiar with this code. (I have a code coverage report working on netcdf-c; it could be made part of the CI so all could see the code coverage.) I think this PR should be merged if it passes all tests, but also that effort should be made to improve the documentation and testing of this code. Turning on the DAP features in the recently added CI for big-endian also seems like a good idea.
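As a rough sketch of how such a coverage report might be produced for CI, the following assumes a gcc/lcov toolchain; the flag names, build layout, and commands here are illustrative assumptions, not taken from the actual netcdf-c CI configuration:

```shell
# Hypothetical sketch: build netcdf-c with coverage instrumentation,
# run the test suite, and produce an lcov HTML report.
# All flags and paths below are assumptions for illustration.
git clone https://github.com/Unidata/netcdf-c.git
cd netcdf-c
mkdir build && cd build
cmake .. -DCMAKE_C_FLAGS="--coverage" -DCMAKE_EXE_LINKER_FLAGS="--coverage"
make -j"$(nproc)"
ctest                      # running the tests generates the .gcda coverage data
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage-html
```

A step along these lines could be added as a CI job so the report is regenerated on every PR, making it visible which of Dennis' code paths are actually exercised by the test suite.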
    exit(1);
}

static int

Documentation for these functions?
It remains a draft PR, and while not generated wholly by AI, the changes were AI-assisted. Currently, the goal of this PR isn't 'fixing BE issues' as such, but rather exploring the boundaries and capabilities of these systems and the underlying tools (both 'how do Docker Sandbox environments work from a technical standpoint' and 'how do I leverage what I know about these systems in these environments'). I have enough familiarity with Dennis' code to examine the knock-on effects of these changes when I turn my attention to it. I'll also be reviewing the other PRs in short order. Thanks!
|
I'm also examining a 'bite-sized' workflow, in terms of helping develop a regimented approach to AI-assisted development. Computers are fantastic at generating large volumes of output; it's what makes them amazing :). But when every change must be reviewed manually, there needs to be an approach to keep the workload manageable. Leveraging AI to review large output only shifts the effort up the stack, since now every conclusion of an AI-assisted code review must itself be reviewed for accuracy, and so on and so forth. So, what do bite-sized, manageable workflows look like? That's part of what I'm investigating with this, insofar as what they look like for me, personally. While this fix seems promising, there were many blind alleys, and ultimately it got us 98% of the way there (including forcing me to review code that provided the context I needed to catch the final issue that the models kept missing). It's a terrifically powerful tool, and I'm having a lot of fun getting back into the command-line interface with them, but ultimately we will have to wait and see how things pan out big-picture. This BE issue has lingered, however, and it felt like a good place to start testing the bounds in terms of 'let's give it a difficult issue that it cannot operate on directly'.
|
Awesome. I'm also very happy to see these difficult, long-standing issues get resolved. The size of a PR is indeed crucial for effective review; I would say this one was about the largest that can be easily reviewed. Ideally, each issue should get its own PR, and if it is just one line of code changing, so much the better, as that is easiest to review. But this of course has to be balanced against practical considerations. I try to give separate PRs for changes that:
This is why the documentation PRs are broken up instead of being one giant PR with all the documentation changes.
Testing a potential fix for big-endian remote data tests.