While I agree with the later (or middle?) points, maybe for different reasons or maybe I would have reasoned differently, I mostly disagree with the earlier points.
> Any really important comments get lost in the noise
What kind of comments are they using?
When I leave comments on GitLab they're threads that get resolved explicitly. GitHub also uses resolvable threads. The assignee/creator goes through them one by one, and marks them as resolved when they feel they're done with them. Nothing gets lost like that.
I also make use of '⚠' to mark significant/blocking comments and bullet points. Other labels, similar to conventional comment prefixes such as "thought:" or "note:", can indicate the priority and significance of comments.
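To illustrate what such labeled review comments can look like (the labels follow the conventional-comments style; the content is made up):

```text
⚠ issue (blocking): this unwraps a value that can be empty at this point
note: the same pattern appears in two other places in this file
thought: a helper could deduplicate this, but that's fine as a follow-up
```

The label up front lets the author triage at a glance which comments gate the merge and which are optional.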
> Instead of leaving twenty comments, I’d suggest leaving a single comment explaining the stylistic change you’d like to make, and asking the engineer you’re reviewing to make the correct line-level changes themselves.
I kinda agree, but I often leave the comment on the code in question, and often add a code-change suggestion to visualize what I mean. One such comment can stand in for, and refer to, all other occurrences of the same thing; it doesn't have to apply exclusively to those lines.
> Otherwise you’re putting your colleagues in an awkward position. They can either accept all your comments to avoid conflict, adding needless time and setting you up as the de facto gatekeeper for all changes to the codebase, or they can push back and argue on each trivial point, which will take even more time. Code review is not the time for you to impose your personal taste on a colleague.
I make sure that my team has a common understanding, and that my comments add sufficient context to make it clear, that code-change suggestions and "I would have [because]" remarks can usually, or in general, be freely rejected unless specified otherwise. Comments often include how important a change is to me, either in the comment itself or in a summary comment for the review iteration (covering a set of comments). Comments can also serve as a spark for discussion about solutions and approaches, or about common or eventual goals of the changed code that may be targeted after the changes currently under review.
> Review with a “will this work” filter, not with a “is this exactly how I would have done it” filter
I wouldn't want to do it like that, specifically. It's a question of weighing risks and medium- and long-term maintainability against delivery, work, changeset and review complexity, and delay. Rather than "will this work", I ask myself, "is this good enough [within context]".
> Leave a small number of well-thought-out comments, instead of dashing off line comments as you go and ending up with a hundred of them
Maybe I've had too many juniors to get into this mindset. But there have definitely been numerous times where I left many comments on reviews, even again on successive iterations. Besides reviewing the code technically, the review can also serve as a form of communication, assimilation, and teaching (the project and codebase at hand, work style, and other things).
It's good to talk about concerns, issues, and frustrations, as well as the upsides of doing so and of working like that. Retrospectives and personal talks or discussions can help with that. Apart from other discussion, planning, and support meetings, the review is the interface between people and a great way to communicate.
That webpage certainly blinds me like a surgeon's light would /s 😏
Looking at the US in particular right now, I'm not confident it would be used in good conscience. Who knows what they'll want to prosecute. Justice frameworks can only work with confidence in justice.
This explanation sounds fine. I haven't seen an actual link to the content of the agreed upon convention across the linked sites.
The Wikipedia article on United Nations Convention against Cybercrime paints a much more concerning picture.
> The convention names four types of crimes in particular, which human rights advocates argue are framed too broadly, applicable to any crime committed using an information or communications technology. Many of the crimes it would apply to have only a thin connection to the kind of serious cybercrime, like ransomware and child exploitation, that motivated the convention.
>
> Several organizations highlight the way the convention's language about human rights protections are largely suggestions left to the discretion of member states, including those with a record of human rights abuses.
Let's hope it's a useful framework, and that countries will still make assessments and restrictions on it depending on whom they're dealing and working with. I'm still concerned though.
Why is this community not allowing English language comments when it's seemingly obviously in English?
Visual Studio provides some kind of AI even without Copilot.
Inline (single-line) completions - I don't always, but regularly, find these quite useful.
Repeated-edit continuations - I haven't seen them in a while, but have used them on maybe two or three occasions. I'm very selective about these because they're not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, "invert if" changes many lines' indentation; if an LLM does that change, you can't be sure it didn't alter any of those lines.
Multi-line completions/suggestions - I disabled those because they push away the code and context I want to see around the cursor, and add noisy movement, for - in my limited experience - marginal if any usefulness.
In my company we're still in a selective testing phase regarding customer agreements and, after that, source-code integration with AI providers. My team is not part of that yet, so I have no practical experience with any analysis, generation, or chat functionality with project context. I'm skeptical but somewhat interested.
I did try it in a private project - a Nushell plugin in Rust, which is largely unfamiliar territory for me - using Copilot to generate methods and the like. It felt very messy and confusing. The generated code was often not correct or sound.
I use Phind and, more recently, ChatGPT more for research/search queries. I'm mindful of the type of queries I make and which provider or service I use. In general, I'm a friend of reference docs, which are the only definitive source after all. I'm also aware and mindful of the environmental impact of indirectly costly free AI search/chat. Often, AI can respond to my questions quicker than searching via a search engine and in upstream docs - especially when I'm familiar with the tech and can relatively quickly be reminded, guide the AI when it responds with bullshit or suboptimal or questionable stuff, or relatively quickly disregard the AI entirely when it doesn't seem capable of responding to what I'm looking for.
The demo login says "invalid username or password". Is it possible someone changed the password on the demo account?
The entire SDK is programmed in CMake! 😱
… okay, it's git submodules
> cdrewind    Rewind CDROMs before ejection.
lol wut
One of the two associations is in power and actively dismantling society. The other develops a technical product and runs a Lemmy instance many people and other instances have blocked.
Handling or concluding them a bit differently seems quite fine to me.
That being said, I've seen plenty of Lemmy dev connection criticism on this platform. I can't say the same about FUTO.
No Gotos, All Subs
That's sub-optimal
😏
I don't think Microsoft will hold your hand. It's the local IT or usage support.
In my eyes the main issue is the decision makers falling for familiarity and marketing/sales pushing.
Which makes it even more absurd/ironic that, after investing in the switch, they're investing again in switching to something that is not really better.
Either way, this time though, there's a lot more relevance and pressure to make a change, and a lasting change. The environment is not the same as before.
I vaguely remember reading about two/twice. But I can't provide sources either.
What is the vulnerability, what is the attack vector, and how does it work? Here's the technical context from the linked source, Edera:
> This vulnerability is a desynchronization flaw that allows an attacker to "smuggle" additional archive entries into TAR extractions. It occurs when processing nested TAR files that exhibit a specific mismatch between their PAX extended headers and ustar headers.
>
> The flaw stems from the parser's inconsistent logic when determining file data boundaries:
>
> - A file entry has both PAX and ustar headers.
> - The PAX header correctly specifies the actual file size (size=X, e.g., 1MB).
> - The ustar header incorrectly specifies zero size (size=0).
> - The vulnerable tokio-tar parser incorrectly advances the stream position based on the ustar size (0 bytes) instead of the PAX size (X bytes).
>
> By advancing 0 bytes, the parser fails to skip over the actual file data (which is a nested TAR archive) and immediately encounters the next valid TAR header located at the start of the nested archive. It then incorrectly interprets the inner archive's headers as legitimate entries belonging to the outer archive.
>
> This leads to:
>
> - File overwriting attacks within extraction directories.
> - Supply chain attacks via build system and package manager exploitation.
> - Bill-of-materials (BOM) bypass for security scanning.
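A minimal sketch of that boundary decision in Python (illustrative only, not tokio-tar's actual code): the correct behavior is that a PAX `size` record overrides the ustar size field when computing how much file data to skip.

```python
BLOCK = 512  # TAR file data is stored in 512-byte blocks

def entry_span(pax_size, ustar_size, use_pax=True):
    """How many bytes of file data the parser skips after the headers,
    rounded up to the TAR block size. `use_pax=False` models the bug:
    the PAX size record is ignored in favor of the ustar size field."""
    size = pax_size if (use_pax and pax_size is not None) else ustar_size
    return (size + BLOCK - 1) // BLOCK * BLOCK

# Entry claims size=0 in the ustar header but size=1024 via PAX.
# A correct parser skips the 1024 bytes of (nested-archive) data:
assert entry_span(1024, 0, use_pax=True) == 1024
# The vulnerable logic skips 0 bytes, so the next "header" it reads is
# the first block of the nested archive, smuggling its entries into
# the outer extraction:
assert entry_span(1024, 0, use_pax=False) == 0
```

The names and the exact rounding are my assumptions for illustration; the point is that the two size sources must be reconciled consistently, or the reader and the writer of the stream disagree about where the next header starts.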
The attack surface is the flaw. The chain of trust is the flaw/risk.
Who's behind the project? Who has control? How are releases handled? What are the risks and vulnerabilities of the entire product delivery?
It's much more obvious and established/vetted with Mozilla. With any other fork product, you first have to evaluate it yourself.
You could call yourself enlightened 😏
I strongly disagree.
Coloring is categorization of code. Much like indentation, spacing, line breaking, and alignment, it aids readability.
None of the examples they provided looked better, more appropriate, or more useful. None of the "tests" led me to question my syntax highlighting. Quite the contrary.
By reducing the highlighting to what they deem important, they lose the highlighting for other cases. The examples of highlighting only one or two things make this obvious. When you highlight only method heads, you gain clarity when reading at that level, across methods, but lose everything when reading a body.
I didn't particularly like their dark theme choice. Their initial example is certainly noisy, but you can have better themes and defaults with more subtle and more equal strength colors. The language or framework syntax and spacing can also influence it.
Bolding is very useful when color categorizes code to give additional structure discoverability, just like spacing does.
> I failed the question about remembering what colour my class definitions were, but you know what? I don’t care. All I want is for it to be visually distinct when I’m trying to parse a block of code
Between multiple IDEs, text editors, diff viewers and editors, and hosted tools like MR/review diffs, they're not even consistently just one thing. For me, very practically and factually, the colors differ.
As you point out, they're entirely missing the point: what the colors are for and how they're being used.
I would agree, but then I look at it...
They wrote:
> Feel free to fork the project under a
(yes, the sentence ends with the 'a')
The UZDoom GitHub project description says:
> UZDoom is a feature centric port for all Doom engine games, based on ZDoom, adding an advanced renderer, powerful scripting capabilities, and forked under a
Its ending with 'forked under a' is probably a reference to that comment? lol, nice reference joke, but I hope they change it after a while, because as a description it's quite confusing.
A great comment over there links two code-comment threads I found significant and interesting.
While it was primarily about ethics, it should also be noted that the code was described as "impressively wrong", as well as not actually compiling. I mean, it basically checked whether a theme was dark by whether it had the word "dark" in its name - which is not a good heuristic - when better ways of doing it exist.
I created a Nushell plugin in Rust that merely converts between Nushell and BSON data formats.
It works, but I still have a fundamental lack of understanding of the magic abstract generalized data transformation framework/interface.
I wish there were fewer magic conversions and transformations, and less required knowledge of them and of calling or choosing the correct ones. Magic traits leading to magic conversions for magic reasons. Or something.