
bahmanm

@bahmanm@lemmy.ml

Husband, father, kabab lover, history buff, chess fan and software engineer. Believes creating software must resemble art: intuitive creation and joyful discovery.

🌎 linktr.ee/bahmanm

Views are my own.


bahmanm OP

Thanks for the pointer! Very interesting. I may actually end up building a prototype to see how far I can get.

bahmanm

Good question!

IMO a good way to help a FOSS maintainer is to actually use the software (esp pre-release) and report bugs instead of working around them. Besides helping the project’s quality, I find it very heart-warming to receive feedback from users: it means people out there are not only using the software but care enough about it to take the time to report bugs and test patches.

bahmanm OP

That’s a great starting point - and a good read anyway!

Thanks 🙏

bahmanm OP

I usually capture all my development-time “automation” in Make and Ansible files. I also use makefiles to provide a consistent set of commands for the CI/CD pipelines to work w/, in case different projects use different build tools. That way CI/CD only needs to know about make build, make test, make package, … instead of Gradle/Maven/… specific commands.

Most of the time, the makefiles are quite simple and don’t need many comments. However, there are times when that’s not the case, hence the need to write a line of comment on particular targets and variables.
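To give a concrete flavour of that uniform interface, here’s a minimal sketch; the three target names come from the comment above, while the Gradle tasks are merely an illustrative choice of underlying tool:

```makefile
# A sketch of the uniform CI/CD interface.  The delegated Gradle tasks
# are illustrative - a Maven project would swap in ./mvnw goals instead.
.PHONY : build test package

build :
	./gradlew assemble

test :
	./gradlew check

package :
	./gradlew distZip
```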

bahmanm OP

Can you describe what you mean by “check the environment”, and why you’d need to do that before anything else?

One recent example is a makefile (in a subproject) w/ a dozen targets to provision machines and run Ansible playbooks. Almost all the targets need at least a few variables to be set. Additionally, I needed any fresh invocation to clean the “build” directory before starting the work.

At first, I tried capturing those variables w/ a bunch of ifeqs, shells and defines. However, I wasn’t satisfied w/ the results for a couple of reasons:

  1. Subjectively speaking, it didn’t turn out as nice and easy-to-read as I wanted it to.
  2. I had to replicate my (admittedly simple) clean target as a shell command at the top of the file.

Then I tried capturing that in a target using bmakelib.error-if-blank and bmakelib.default-if-blank as below.


```makefile
##############

.PHONY : ensure-variables

ensure-variables : bmakelib.error-if-blank( VAR1 VAR2 )
ensure-variables : bmakelib.default-if-blank( VAR3,foo )

##############

.PHONY : ansible.run-playbook1

ansible.run-playbook1 : ensure-variables cleanup-residue | $(ansible.venv)
ansible.run-playbook1 :
	...

##############

.PHONY : ansible.run-playbook2

ansible.run-playbook2 : ensure-variables cleanup-residue | $(ansible.venv)
ansible.run-playbook2 :
	...

##############
```

But this was not DRY: I had to repeat the ensure-variables prerequisite for every target.

That’s why I thought there may be a better way of doing this which led me to the manual and then the method I describe in the post.


running specific targets or rules unconditionally can lead to trouble later as your Makefile grows up

That is true! My concern is that when the number of targets which don’t need that initialisation grows, I may have to rethink my approach.

I’ll keep this thread posted on how this pans out as the makefile scales.


Even though I’ve been writing GNU Makefiles for decades, I still am learning new stuff constantly, so if someone has better, different ways, I’m certainly up for studying them.

Love the attitude! I’m in the same boat. I could have just kept doing what I already knew, but I thought a bit of manual reading would be well worth it.

bahmanm OP

Thanks. At least I’ve got a few clues to look for when auditing such code.

bahmanm OP

Agree w/ you re trust.

bahmanm

Which Debian version is it based on?

bahmanm

Thanks! So much for my reading skills/attention span 😂

bahmanm OP

Done ✅

Thanks for your interest 🙏


Please do drop a line in either !lemmy_meter or #lemmy-meter:matrix.org if you’ve got feedback/ideas for a better lemmy-meter. I’d love to hear them!


Oh and feel free to link back to lemmy-meter from Blåhaj if you’d like to, in case you’d prefer the community to know about it.

bahmanm

Something that I’ll definitely keep an eye on. Thanks for sharing!

bahmanm

RE Go: Others have already mentioned the right way, though I’d personally prefer ~/opt/go over what was suggested.
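Assuming the standard GOPATH mechanism, that preference would look something like this in a bash init file (a sketch, not from the original comment):

```sh
# A sketch of the ~/opt/go preference via the standard GOPATH variable.
export GOPATH="$HOME/opt/go"
export PATH="$GOPATH/bin${PATH:+:${PATH}}"
```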


RE Perl: To instruct Perl to install to another directory, for example to ~/opt/perl5, put the following lines somewhere in your bash init files.


```sh
export PERL5LIB="$HOME/opt/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"
export PERL_LOCAL_LIB_ROOT="$HOME/opt/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}"
export PERL_MB_OPT="--install_base \"$HOME/opt/perl5\""
export PERL_MM_OPT="INSTALL_BASE=$HOME/opt/perl5"
export PATH="$HOME/opt/perl5/bin${PATH:+:${PATH}}"
```

Note that you’ll need to re-install the Perl packages you had previously installed.
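With those variables in effect, the stock CPAN clients should pick up the new install base; a quick sketch (Some::Module is a hypothetical placeholder):

```sh
# ExtUtils::MakeMaker and Module::Build honour PERL_MM_OPT/PERL_MB_OPT,
# so new installs land under ~/opt/perl5.  Some::Module is a placeholder.
cpan Some::Module
perl -MSome::Module -e 'print "loaded\n"'   # resolved via PERL5LIB
```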

bahmanm

I didn’t like the capitalised names, so I configured xdg to use all-lowercase names. That’s why ~/opt fits in pretty nicely.

You’ve got a point re ~/.local/opt but I personally like the idea of having the important bits right in my home dir. Here’s my layout (which I’m quite used to now after all these years):


```
$ ls ~
bin  desktop  doc  downloads  mnt  music  opt  pictures
public  src  templates  tmp  videos  workspace
```

where

  • bin is just a bunch of symlinks to frequently used apps from opt
  • src is where I keep clones of repos (but I don’t do work in src)
  • workspace is where I do my work on git worktrees (based off src)
bahmanm

First off, I was ready to close the tab at the slightest suggestion of using Velocity as a metric. That didn’t happen 🙂


I like the idea that metrics should be contained and sustainable. Though I don’t agree w/ the suggested metrics.

In general, it seems they are all designed around the process and not the product. In particular, there’s no mention of the “value unlocked” in each sprint: an important one for an Agile team, as it holds Product accountable for understanding the $$$ value of the team’s effort.

The suggested set, to my mind, is formed around the idea of a feature-factory line and its efficiency (assuming that is even measurable). It leaves out the “meaning” of what the team achieves w/ that efficiency.

My 2 cents.


Good read nonetheless 👍 Got me thinking about this intriguing topic after a few years.

bahmanm

This is fantastic! 👏

I use Perl one-liners for record and text processing a lot, and this will definitely be something I keep coming back to - I’ve already learned a trick from “Context Matching” (9) 🙂

bahmanm OP

I’m not sure how this got cross-posted! I most certainly didn’t do it 🤷‍♂️

bahmanm OP

Thanks for sharing your insights.


Thinking out loud here…

In my experience with traditional logging and distributed systems, timestamps and request IDs do store the information required to partially reconstruct a timeline:

  • In the case of a linear (single-branch) timeline, you can always “query” by a request ID and order by the timestamps, and that’s pretty much what tracing will do too (see the sketch after the diagram below.)
  • Things, however, get complicated when you’ve got a timeline w/ multiple branches.
    For example, consider the following relatively simple diagram.
    Reconstructing the causality and join/fork relations between the execution nodes is almost impossible using traditional logs, whereas a tracing solution will turn this into a nice visual w/ all the spans and sub-spans.

https://lemmy.ml/pictrs/image/9e00ce74-96e5-4961-8579-7a25f48f92ce.png
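For the linear case, the “query and order” idea is essentially a one-liner; the log file name and the request_id field below are made up for illustration:

```sh
# Reconstruct a single-branch timeline from traditional logs.
# app.log and the request_id field are hypothetical.
grep 'request_id=3f2a' app.log | sort
# Lines starting w/ ISO-8601 timestamps sort chronologically.
```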

That said, logs do shine when things go wrong - when you start your investigation by using a stacktrace in the logs as a clue. That (the stacktrace) is something I’m not sure a tracing solution would be able to provide.


they should complement each other

Yes! You nailed it 💯

Logs are indispensable for troubleshooting (and potentially nothing else) while tracers are great for, well, tracing the data/request throughout the system and analysing the mutations.

bahmanm

That was my case until I discovered that GNU tar has got a pretty decent online manual - it’s way better written than the manpage. I rarely forget the options nowadays even though I don’t use tar that frequently.

bahmanm

This is quite intriguing. But DHH has left so many details out (at least in that post) as pointed out by @breadsmasher - it makes it difficult to relate to.

On the other hand, like DHH said, one’s mileage may vary: it’s, in many ways, a case-by-case analysis that companies should do.

I know many businesses shrink the Ops team and hire less experienced Ops people to save $$$ - only to forward those saved $$$ to cloud providers. I can only assume DHH’s team is comprised of a bunch of experienced, well-paid Ops people who can pull such feats off.

Nonetheless, looking forward to, hopefully, a follow up post that lays out some more details. Pray share if you come across it 🙏

bahmanm OP

TBH I use whatever build tool is the best fit for the job, be it Gradle, SBT or Rebar.

But for some (presumably subjective) reason, I like GNU Make quite a lot. And whenever I get the chance I use it - esp since it’s somehow ubiquitous nowadays w/ all the Linux containers/VMs everywhere and Homebrew on Mac machines.

bahmanm OP

Uh, I’m not sure I understand what you mean.

bahmanm OP

I think I understand where RMS was coming from RE “recursive variables”. As I wrote in my blog:

Recursive variables are quite powerful as they introduce a pinch of imperative programming into the otherwise totally declarative nature of a Makefile.

They extend the capabilities of Make quite substantially. But like any other powerful tool, one needs to use them sparingly and responsibly or end up w/ a complex and hard-to-debug Makefile.

In my experience, most of the time I can avoid using recursive variables and instead lay out the rules and prerequisites in a way that achieves the same result. However, occasionally I have to resort to them, and I’m thankful that RMS didn’t win and they exist in GNU Make today 😅 IMO purist solutions have a tendency to turn out impractical.
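For anyone who hasn’t run into the distinction, here’s a tiny sketch (not from the blog post) of what makes recursive variables feel “imperative”:

```makefile
# Recursively expanded (=): the right-hand side is expanded at *use* time.
late = $(name) world
# Simply expanded (:=): expanded immediately, at assignment time.
early := $(name) world

name = hello

demo :
	@echo '$(late)'    # prints "hello world" - name is set by use time
	@echo '$(early)'   # prints " world" - name was empty at assignment
```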

bahmanm

Interesting topic - I’ve seen it surface up a few times recently.

I’ve never been a mod anywhere, so I can’t accurately picture what workflows/tools a mod needs to be satisfied w/ their, well, mod’ing.

For the sake of my education at least, can you elaborate on what you consider decent moderation tools/workflows? What gaps do you see between that and Lemmy?

PS: I genuinely want to understand this topic better but your post doesn’t provide any details. 😅

bahmanm

I see.

So what do you think would help w/ this particular challenge? What kinds of tools/facilities would help counter that?


Off the top of my head, do you think

  • The sign up process should be more rigorous?
  • The first couple of posts/comments by new users should be verified by the mods?
  • Mods should be notified of posts/comments w/ poor scores?

cc @PrettyFlyForAFatGuy

bahmanm

Love the attitude 💪 Let me know if you need help in your quest.

bahmanm

That sounds like a great starting point!

🗣Thinking out loud here…

Say a crate implements the AutomatedContentFlagger interface: it would show up on the admin page as an “Automated Filter” and the admin could dis/enable it on demand. That way we could have more filters than just CSAM using the same interface.

bahmanm

I just love the “Block User” feature. Immediate results w/ zero intervention by the mods 😆

Philosophy of coroutines ( www.chiark.greenend.org.uk )

I’ve been using coroutines since I first encountered them in the same book that this author found them in. Unlike him I’ve used them all over the place professionally and in my personal stuff. I prefer them to threads, to FSMs, and to the callback Hell of reactors for most of my work. This article has a good explanation of...

bahmanm

I’ll just quote my comment from a similar post earlier 😅

A bit too long for my brain but nonetheless it is written in plain English, conveys the message very clearly and is definitely a very good read on the topic. Thanks for sharing.

No Strings Attached: Enjoy the Freedom of Free Disposable Email ( discuss.tchncs.de )

TempEmailGo understands the importance of privacy. That’s why we offer a hassle-free, registration-free, and cost-free solution to protect your online identity. With our free disposable email service, you can receive emails and verification codes without sharing any personal information. It’s time to take control of your...

bahmanm (edited)

Nice! Good to see this idea becoming more common 👍

I personally use Firefox Relay, which gives me better control over my workflow - I usually need my temporary e-mails to last a bit longer, eg a week or a month.


On another note, the post’s clickable URL opens the Lemmy instance’s landing page and not that of the disposable email service.

bahmanm

Would be lovely to have a downloads-per-release diagram along w/ the release dates (b/c Summer matters in the FOSS world 😆)

bahmanm

That single line of Lisp is probably (defmacro generate-compiler (…) …) which GCC folks call every time they decide to implement a new compiler 😆

bahmanm

A bit too long for my brain but nonetheless it’s written in plain English, conveys the message very clearly and is definitely a very good read. Thanks for sharing.

bahmanm

When I read the title, my immediate thought was “Mojolicious project renamed? To a name w/ an emoji!?” 😂


We plan to open-source Mojo progressively over time

Yea, right! I can’t believe that there are people who prefer to work on/with a closed-source programming language in 2023 (as if it were the ’80s).

… can move faster than a community effort, so we will continue to incubate it within Modular until it’s more complete.

Apparently it was “complete” enough to ask the same “community” for feedback.

I genuinely wonder how they managed to convince enthusiasts to give them free feedback/testing (on GitHub/Discord) for something whose source code they couldn’t even access.


PS: I didn’t downvote. I simply got upset to see this happening in 2023.
