A couple of years ago, my friend wanted to learn programming, so I was giving her a hand with resources and reviewing her code. She got to the part on adding code comments, and wrote the now-infamous line,

i = i + 1 #this increments i

We’ve all written superfluous comments, especially as beginners. It’s not even really funny, but for whatever reason we both still remember this specific line years later and laugh at it together.

Years later (this week), to poke fun, I started writing sillier and sillier ways to increment i:

Beginner level:

# this increments i:
x = i 
x = x + int(True)
i = x

Beginner++ level:

# this increments i:
def increment(val):
    for i in range(val + 1):
        output = i + 1
    return output

Intermediate level:

# this increments i:
class NumIncrementor:
	def __init__(self, initial_num):
		self.internal_num = initial_num

	def increment_number(self):
		incremented_number = 0
		# we add 1 each iteration for indexing reasons
		for i in list(range(self.internal_num)) + [len(range(self.internal_num))]: 
			incremented_number = i + 1 # fix obo error by incrementing i. I won't use recursion, I won't use recursion, I won't use recursion

		self.internal_num = incremented_number

	def get_incremented_number(self):
		return self.internal_num

i = int(input("Enter a number:"))

incrementor = NumIncrementor(i)
incrementor.increment_number()
i = incrementor.get_incremented_number()

print(i)

Since I’m obviously very bored, I thought I’d hear your take on the “best” way to increment an int in your language of choice - I don’t think my code is quite expert-level enough. Consider it a sort of Advent of Code challenge? Any code which does not contain the comment “this increments i:” will produce a compile error and fail to run.

No AI code pls. That’s no fun.

  • Sonotsugipaa@lemmy.dbzer0.com · 1 day ago
    // C++20
    
    #include <concepts>
    #include <cstdint>
    
    template <typename T>
    concept C = requires (T t) { { b(t) } -> std::same_as<int>; };
    
    char b(bool v) { return char(uintmax_t(v) % 5); }
    #define Int jnt=i
    auto b(char v) { return 'int'; }
    
    // this increments i:
    void inc(int& i) {
      auto Int == 1;
      using c = decltype(b(jnt));
      // edited mistake here: c is a type, not a value
      // i += decltype(jnt)(C<decltype(b(c))>);
      i += decltype(jnt)(C<decltype(b(c(1)))>);
    }
    

    I’m not quite sure it compiles; I wrote this on my phone, and with the sheer number of landmines here, making a mistake is almost inevitable.

        • Sonotsugipaa@lemmy.dbzer0.com · 1 day ago

          Multicharacter char literals evaluate as int, with implementation-defined values - it’s extremely unreliable, but that particular piece of code should work.

      • Sonotsugipaa@lemmy.dbzer0.com · 1 day ago

        It’s funny that it complains about all of the right stuff (except the ‘int’ thing), but it doesn’t say anything about the concept.

        About the ‘int’ literal (which is not a string): cppreference.com has a description of it on this page - ctrl+f “multicharacter literal”.

    • Sonotsugipaa@lemmy.dbzer0.com · 1 day ago

      Just tested this: the “original+” code compiles, but does not increment i.

      There were two problems:

      • b(bool) and b(char) are ambiguous (quick fix: change the signatures to char b(bool&) and auto b(char&& v));
      • The concept definition has to come after the b functions, even though the constraint is only checked after both - I was unaware of this (fix: define C immediately before void inc(int&)).