
I am working on a finite element (FE) code and want to provide multiple material models (20+). In FE applications, the computational domain is subdivided into a set of geometrically simple elements, e.g. tetrahedra or hexahedra. I use Workers that carry out elementwise computations using a given material model. Depending on the material used (e.g. compressible vs. incompressible), we might end up using different Worker classes, which require different interfaces from the material models.

Some of the material models are meta models that internally use other material models. A good example is the so-called Maxwell element, which is composed of an elastic spring and a dashpot. We would model the spring using an already existing material model from our library. The problem is that such meta models require an interface from the material model which might be different from the one required by the Workers.

In my current attempt I am using multiple inheritance (MI) in order to implement the different interfaces. Usually I try to avoid MI, but I haven't found a better solution.

I use factories to create model and worker objects, returning the base classes ModelABC and WorkerABC, respectively. The concrete classes Worker1 and Worker2 define the interfaces they require from the material models, namely Worker*::IModel. Similarly, the MetaModel class defines an interface MetaModel::IModel. The elementary material models Model1 and Model2 then inherit only the interfaces which I want them to implement (sometimes it physically does not make sense for a model to implement a certain function, e.g. an anisotropic material should not provide functions which assume isotropy). Here is a demo of the code:

    #include <memory>
    #include <iostream>
    #include <string>
    #include <cassert>

    class ModelABC;

    class WorkerABC
    {
    public:
      virtual ~WorkerABC() = default;

      void evaluate_cell (/*args*/) const {this->evaluate_cell_impl(/*args*/);}

    private:
      virtual void evaluate_cell_impl (/*args*/) const = 0;
    };

    class Worker1 : WorkerABC
    {
    public:
      class IModel
      {
      public:
        double get_stress (const double x) const {return this->get_stress_impl (x);}

        virtual ~IModel() = default;

      private:
        virtual double get_stress_impl (const double x) const = 0;
      };

      Worker1 (ModelABC* m);

      virtual ~Worker1() = default;

    private:
      void evaluate_cell_impl (/*args*/) const override {/*uses get_stress*/}

      std::unique_ptr<IModel> model;
    };

    class Worker2 : public WorkerABC
    {
    public:
      class IModel
      {
      public:
        double get_stress (const double x) const {return this->get_stress_impl (x);}

        double get_pressure (const double x) const {return this->get_pressure_impl (x);}

        virtual ~IModel() = default;

      private:
        virtual double get_stress_impl (const double x) const = 0;

        virtual double get_pressure_impl (const double x) const = 0;
      };

      Worker2 (ModelABC* m);

      virtual ~Worker2() = default;

    private:
      void evaluate_cell_impl (/*args*/) const override {/*uses get_stress and get_pressure*/}

      std::unique_ptr<IModel> model;
    };

    // Have to use this class.
    class ParameterHandler
    {
    public:
      ParameterHandler (const std::string &s) : id(s) {}

      virtual ~ParameterHandler() = default;

      const std::string& get_id() const {return id;}

      // parse, declare, write parameters etc.

    private:
      std::string id;
    };

    // Abstract Model base class needed for factories.
    class ModelABC : public ParameterHandler
    {
    public:
      // some common functions
      ModelABC (const std::string &s) : ParameterHandler(s) {}

      virtual ~ModelABC() = default;
    };

    class MetaModel : public Worker2::IModel, public ModelABC
    {
    public:
      class IModel
      {
      public:
        double get_energy (const double x) const {return get_energy_impl(x);}

        virtual ~IModel() = default;

      private:
        virtual double get_energy_impl (const double) const = 0;
      };

      MetaModel (const std::string &s, ModelABC* m)
        : ModelABC(s), model(dynamic_cast<IModel*>(m))
      {
        if (model == nullptr)
          throw;
        std::cout << "Meta model uses: " << m->get_id() << std::endl;
      }

      virtual ~MetaModel() = default;

    private:
      double get_stress_impl (const double x) const override {return x*x;}

      double get_pressure_impl (const double x) const override {return model->get_energy(x)/x;}

      std::unique_ptr<IModel> model;
    };

    // Can only be used by Worker1
    class Model1 : public ModelABC, public Worker1::IModel
    {
    public:
      Model1 (const std::string &s) : ModelABC(s) {}

      virtual ~Model1() = default;

    private:
      double get_stress_impl (const double x) const override {return x;}
    };

    // Can be used by MetaModel and Worker2
    class Model2 : public ModelABC, public Worker2::IModel, public MetaModel::IModel
    {
    public:
      Model2 (const std::string& s) : ModelABC(s) {}

      virtual ~Model2() = default;

    private:
      double get_stress_impl (const double x) const override {return 2*x;}

      double get_pressure_impl (const double x) const override {return x;}

      double get_energy_impl (const double x) const override {return 4*x*x;}
    };

    // example.cc
    inline Worker1::Worker1 (ModelABC* m)
      : model(dynamic_cast<IModel*>(m))
    {
      if (model == nullptr)
        throw;
      std::cout << "Created Worker with ModelABC: " << m->get_id() << std::endl;
    }

    inline Worker2::Worker2 (ModelABC* m)
      : model(dynamic_cast<IModel*>(m))
    {
      if (model == nullptr)
        throw;
      std::cout << "Created Worker with ModelABC: " << m->get_id() << std::endl;
    }

What do you think about the use of MI in the above case? Is it a bad design choice?


    1 Answer


    Incidentally, kudos to you for using the Non-Virtual Interface idiom! (For readers who don't know what it is, see here and here.)
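    For readers who want a quick picture without following the links, here is a minimal, made-up sketch of the idiom (these names are not from the question): the public non-virtual function is the stable entry point, and the private virtual hook is the only thing derived classes override.

        #include <iostream>

        class Shape {
        public:
            // Public, non-virtual interface: every caller goes through here,
            // so pre/post-conditions or logging can later be added in one place.
            double area() const { return area_impl(); }
            virtual ~Shape() = default;
        private:
            // Private virtual hook: the only customization point for derived classes.
            virtual double area_impl() const = 0;
        };

        class Square : public Shape {
        public:
            explicit Square(double side) : side_(side) {}
        private:
            double area_impl() const override { return side_ * side_; }
            double side_;
        };

        int main() {
            Square s(2.0);
            std::cout << s.area() << '\n';  // prints 4
        }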

    To a first approximation, I would say that multiple inheritance is usually a bad design choice, so it probably is bad in this case as well.

    Looking a bit closer, I don't see the point of ModelABC in this architecture. You have ModelABC so that all models can inherit from a single base class... but inheritance should be used only when the base class is useful in its own right as a polymorphic interface. The reason to have an AnimalABC base class is that you're going to be writing a lot of polymorphic (generic) code that operates on polymorphic (generic) animals via that common AnimalABC API. If all your code is either specific to Cats or specific to Dogs, then you don't need them to inherit from AnimalABC because that common API isn't buying you anything.
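    To make that concrete, here is a tiny made-up sketch (none of these names come from the question): the base class is justified only by code that actually operates through it.

        #include <iostream>
        #include <memory>
        #include <vector>

        struct AnimalABC {
            virtual ~AnimalABC() = default;
            virtual void speak() const = 0;
        };
        struct Cat : AnimalABC { void speak() const override { std::cout << "meow\n"; } };
        struct Dog : AnimalABC { void speak() const override { std::cout << "woof\n"; } };

        // This generic function is the reason AnimalABC exists: it works on any animal.
        void speak_all(const std::vector<std::unique_ptr<AnimalABC>>& zoo) {
            for (const auto& a : zoo) a->speak();
        }

        // If every call site looked like this instead, the common base would buy you nothing.
        void pet_the_cat(const Cat& c) { c.speak(); }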


    Seems like the constructor Worker1(ModelABC*) should be Worker1(Worker1::IModel*) instead. Either you're calling it like

        Model1 m1;  // implements Worker1::IModel and also ModelABC
        Worker1 w1(&m1);

    in which case the conversion-to-base followed by dynamic_cast-back-to-IModel is harmless but slow; or else you're calling it like

        ModelABC& m = ...;  // comes from somewhere else
        Worker1 w1(&m);

    in which case it's still arguably better to push the dynamic_cast into the caller:

        ModelABC& m = ...;  // comes from somewhere else
        Worker1 w1(&dynamic_cast<Worker1::IModel&>(m));  // throws bad_cast on failure

    This looks ugly, because it should look ugly: this caller has a model-it-doesn't-know-what-it-is, and it's trying to create a Worker1 despite Worker1 being unable to deal with certain kinds of models. We should deal with this ugliness by pushing the dynamic_cast even higher up. This relates to the mantra "Make invalid states unrepresentable." If Worker1 can't deal with anything but Worker1::IModel*, then you statically shouldn't be able to pass it anything but Worker1::IModel*.
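    In code, that suggestion might look roughly like this (a sketch only, reusing the question's WorkerABC and IModel; the non-owning raw pointer is an assumption that ownership lives elsewhere, which is the next topic):

        class Worker1 : public WorkerABC {
        public:
            class IModel { /* exactly as in the question */ };

            // Accept only the interface this worker actually needs;
            // an incompatible model is now a compile-time error, not a runtime throw.
            explicit Worker1(IModel* m) : model(m) {}

        private:
            void evaluate_cell_impl(/*args*/) const override { /* uses model->get_stress(...) */ }
            IModel* model;  // non-owning; see the ownership discussion below
        };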


    Ownership is unclear here. We create a Worker by passing it a pointer to a Model. From my background knowledge, I would assume that you'll have many workers all working on one model, and so no single worker owns the model; they all observe it. It seems reasonable to guess that perhaps the model should own its workers!

    If that's the case, then the problem gets easy, because you can put the model in charge of creating the exact kind of workers it wants.

        // Old code
        Model1 m1;
        m1.addWorker(std::make_unique<Worker1>(&m1));
        m1.addWorker(std::make_unique<Worker1>(&m1));
        Model2 m2;
        m2.addWorker(std::make_unique<Worker2>(&m2));
        m2.addWorker(std::make_unique<Worker2>(&m2));

        // New code
        Model1 m1;
        m1.addWorkers(2);  // Model1::addWorkers knows to create workers of type Worker1
        Model2 m2;
        m2.addWorkers(2);  // Model2::addWorkers knows to create workers of type Worker2

    Don't bother main with decisions like "what kind of Workers" and then risk rejection in the dynamic_cast. Follow the classical OOP principle of asking the object to create workers, since the object is the one who understands how to do it.

        class Model1 {
            void addWorker(std::unique_ptr<Worker1>);
        public:
            void addWorkers(int n) {
                for (int i = 0; i < n; ++i) {
                    addWorker(std::make_unique<Worker1>(this));
                }
            }
        };

        class Model2 {
            void addWorker(std::unique_ptr<Worker2>);
        public:
            void addWorkers(int n) {
                for (int i = 0; i < n; ++i) {
                    addWorker(std::make_unique<Worker2>(this));
                }
            }
        };

    Once you've done that, you might discover that "creating some workers for yourself" is an operation that would be useful in polymorphic contexts, when you don't know statically what kind of model you're dealing with. That would be the time to bring back the base class ModelABC, so that you could make addWorkers a virtual function:

        class ModelABC {
            virtual void do_addWorkers(int);
        public:
            void addWorkers(int n) { do_addWorkers(n); }
        };

        class Model2 : public ModelABC {
            void addWorker(std::unique_ptr<Worker2>);
            void do_addWorkers(int n) override {
                for (int i = 0; i < n; ++i) {
                    addWorker(std::make_unique<Worker2>(this));
                }
            }
        };

    However, you should do that only if you need to call model.addWorkers(n) in a polymorphic context! If you are only ever calling addWorkers on an object whose dynamic type matches its static type — Model1 or Model2 — then you don't need polymorphism and virtual here.


    In fact, when I showed the new code as

        // New code
        Model1 m1;
        m1.addWorkers(2);  // Model1::addWorkers knows to create workers of type Worker1
        Model2 m2;
        m2.addWorkers(2);  // Model2::addWorkers knows to create workers of type Worker2

    maybe I should have said

        Model1 m1;
        m1.addWorker1s(2);
        Model2 m2;
        m2.addWorker2s(2);

    As shown, there is no reason for Model1::addWorker1s and Model2::addWorker2s to have the same function name. The only reason to give two things the same name is if you're planning to call them from generic (templated) code!

        template<class M>
        std::shared_ptr<M> createWithTwoWorkers() {
            auto model = std::make_shared<M>();
            model->addWorkers(2);
            return model;
        }

        std::shared_ptr<Model1> m1 = createWithTwoWorkers<Model1>();
        std::shared_ptr<Model2> m2 = createWithTwoWorkers<Model2>();

    If you're not planning to send Model1 and Model2 down the same generic codepath like this, then there's really no reason for their APIs even to have any names in common.


    That last bit of advice might go too far for your taste. ;) But the point to take away is: Introduce polymorphism/genericity/overloading when it is useful, and only when it is useful. The reader should always be able to say, "I understand why this base class is here," or "I understand why these two classes' member functions have the same name," or "I understand why these two functions are in an overload set." Which means, "I understand what would break if this were changed."

    This relates to the mantra "Keep it simple, stupid!" Polymorphism/genericity/overloading are awesome tools for solving problems, but it's up to you-the-programmer to make sure that you are using them only when there is a problem to be solved.

