While you have duplicated less code than in your first example, I would argue that using `some` and `partial` is a bit overkill for this very trivial example, where `or` is clearer and more succinct. I would like your modification if you were writing a function that takes an arbitrary list of factors to test for divisibility, but you aren't doing that.
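For illustration, if you did want that generalized version, a sketch might look like the following (the name sum-divisible-by is hypothetical, not something from your code):

```clojure
;; Sum all numbers below n that are divisible by any of the given factors.
;; Here `some` earns its keep, since the factor list is arbitrary.
(defn sum-divisible-by [n & factors]
  (reduce + (filter (fn [x] (some #(zero? (rem x %)) factors))
                    (range n))))

(sum-divisible-by 1000 3 5) ;; => 233168
```

In that setting, testing each candidate against a collection of divisors justifies the extra machinery; with exactly two fixed divisors, it doesn't.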
In general, when running from an interactive REPL, it makes no sense to `println` your answer. Though you're not writing a function, we usually think of a function as producing a value, given some inputs, and the value returned is sufficient. Maybe it seems picky, but since you went to the trouble of factoring out a `divisible-by` function, it seems odd that you would unnecessarily print something. Generally speaking, you will find very few `println`s in Clojure code: side-effecting output makes your code harder to test.
The `->>` threading macro is, again, overkill here in my opinion. The goal is to sum a list of numbers, and putting `(reduce + ...)` front and center makes that clear. You have obfuscated the intent a bit by threading this, IMO.
The above points may seem a bit picky, but they stem from the fact that your goal here is very simple: get a list of numbers and add them. By threading the code, using partial application, and factoring out a function, you've made the solution overwhelming compared to what the problem calls for. What you have done is very nice, but I don't think it fits the problem in a way that lets the reader see the intent of the code at a glance. Below is my solution, for reference. It's not fancy, but it tells you what it does rather quickly:
(reduce + (filter #(or (zero? (rem % 3)) (zero? (rem % 5))) (range 1000)))
Here is a more verbose solution that is also clear:
(reduce + (for [x (range 1000) :when (or (zero? (rem x 3)) (zero? (rem x 5)))] x))
One final note: many of the Project Euler problems are math-heavy and, as such, often have a more math-based solution (as opposed to an algorithmic one), or can at least be optimized that way. The Wikipedia page has an example using this first problem, where a constant-time (O(1)) solution can be applied, which you would certainly want to use for a sufficiently large input (many millions or billions, perhaps).
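As a sketch of that constant-time approach (my own illustration, not from the Wikipedia article verbatim): the multiples of k below n form an arithmetic series, so their sum has a closed form, and inclusion-exclusion corrects for multiples of both 3 and 5 being counted twice.

```clojure
;; Sum of the multiples of k below n: k + 2k + ... + mk = k * m * (m + 1) / 2,
;; where m = (n - 1) quot k is the number of such multiples.
(defn sum-multiples-below [k n]
  (let [m (quot (dec n) k)]
    (/ (* k m (inc m)) 2)))

;; Inclusion-exclusion: multiples of 15 appear in both the 3s and the 5s,
;; so subtract them once.
(+ (sum-multiples-below 3 1000)
   (sum-multiples-below 5 1000)
   (- (sum-multiples-below 15 1000)))
;; => 233168
```

No loop over the range is needed, so the cost is independent of n.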