How do you multiply an integer variable by 16 without using the multiplication, addition or division operator? What if you have to multiply by 15?
This is usually done with the bit-shifting operators (<< and >>). Left shifting a variable by 1 bit multiplies it by 2, and right shifting it by 1 bit divides it by 2 (with some caveats for negative signed values). So, to multiply a variable by 16, you can left shift it by 4 bits (2*2*2*2 = 16). To multiply a variable by 15, multiply it by 16 as above and then subtract the original value. Division by a power of two works the same way in reverse, using right shifts. These operations can overflow (or lose the remainder when dividing), so a natural follow-up question is how you would propose to handle those situations.
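For concreteness, here is a minimal sketch in C (the function names times16 and times15 are just for illustration). Note that left shifting a signed int past its range is undefined behavior, which is exactly the overflow concern raised above.

#include <stdio.h>

/* Multiply by 16 without *, + or /: left shifting by 4 bits
   is the same as multiplying by 2*2*2*2 = 16. */
int times16(int x) {
    return x << 4;
}

/* Multiply by 15: shift left by 4 (x16), then subtract the original value.
   Subtraction is fine here, since only *, + and / were ruled out. */
int times15(int x) {
    return (x << 4) - x;
}

int main(void) {
    int x = 7;
    printf("%d * 16 = %d\n", x, times16(x));   /* prints 112 */
    printf("%d * 15 = %d\n", x, times15(x));   /* prints 105 */
    return 0;
}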
You might earn extra points afterward by engaging in a discussion of why using bit-shifts for this is usually not a good idea these days. At the hardware level, bit-shifts and adds/subtracts are much faster than multiplication, but modern optimizing compilers are smart enough to figure out the fastest way to perform arithmetic operations on their own. "x * 15" in your code is much clearer and less prone to bugs than "(x << 4) - x" (note that the parentheses are required, because - binds tighter than <<), and modern compilers will generate similar code for both.
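As a small illustration of that pitfall (the value of x is chosen arbitrarily): in C, the subtraction is evaluated before the shift, so leaving out the parentheses silently computes something else entirely.

#include <stdio.h>

int main(void) {
    int x = 3;
    /* '-' has higher precedence than '<<', so the unparenthesized form
       parses as x << (4 - x), not (x << 4) - x. */
    printf("%d\n", x * 15);        /* 45 */
    printf("%d\n", (x << 4) - x);  /* 45: the intended shift-based version */
    printf("%d\n", x << 4 - x);    /* 6:  actually 3 << (4 - 3) = 3 << 1 */
    return 0;
}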