There's a big caveat when using D*, D*-Lite, or any of the incremental algorithms in this category, and it's a caveat that is seldom mentioned in the literature. These algorithms use a reversed search: they compute costs outwards from the goal node, like a ripple spreading outwards. When edge costs change (e.g. you add or remove a wall in your example), they all have various efficient strategies for updating only the subset of the explored (a.k.a. 'visited') nodes affected by the changes.
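To make the reversed search concrete, here is a minimal sketch of the underlying idea: plain Dijkstra run outwards from the goal, so every expanded node ends up with its cost-to-goal (the value these incremental algorithms maintain and then repair, rather than recompute, when edges change). The grid, `neighbours`, and `edge_cost` below are my own illustrative assumptions, not any particular paper's API:

```python
import heapq

def cost_to_goal(goal, neighbours, edge_cost):
    """Plain Dijkstra from the goal: returns {node: cheapest cost from node to goal}."""
    g = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > g.get(node, float('inf')):
            continue  # stale queue entry
        for nxt in neighbours(node):
            new_cost = cost + edge_cost(nxt, node)  # edge traversed towards the goal
            if new_cost < g.get(nxt, float('inf')):
                g[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return g

# Illustrative use: a 5x5 4-connected grid with unit costs and two wall cells.
walls = {(1, 1), (1, 2)}
def neighbours(p):
    x, y = p
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [q for q in candidates
            if 0 <= q[0] < 5 and 0 <= q[1] < 5 and q not in walls]

g = cost_to_goal((4, 4), neighbours, lambda a, b: 1.0)
```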
The big caveat is that the location of these changes relative to the goal makes an enormous difference to the efficiency of these algorithms. I showed in various papers and my thesis that the worst-case performance of any of these incremental algorithms can be worse than throwing away all the stored information and starting afresh with something non-incremental like plain old A*.
When the changed cost information is close to the perimeter of the expanding search front (the 'visited' region), few paths have to change, and the incremental updates are fast. A pertinent example is a mobile robot with sensors attached to its body. The sensors only see the world near the robot, so the changes lie in this region: around the start of the path, at the outer edge of the reversed search, far from the goal where the search is rooted. Everything therefore works out well, and the algorithms are very efficient at updating the optimum path to correct for the changes.
When the changed cost information is close to the goal of the search (or your scenario sees the goal change locations, not just the start), these algorithms suffer catastrophic slowdown. In this scenario, almost all of the saved information needs to be updated, because the changed region is so close to the goal that almost all pre-calculated paths pass through the changes and must be re-evaluated. Because of the extra bookkeeping these algorithms carry around to make incremental updates possible, a re-evaluation on this scale is slower than a fresh start.
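A back-of-the-envelope way to see why, staying with the sketch above: an edge change invalidates (at least) every node whose stored optimal path to the goal runs through that edge, which in the backward search tree rooted at the goal is the whole subtree hanging below the change. Near the robot that subtree is a handful of nodes; near the goal it is almost the entire tree. The tree walk below is my own illustration of that effect, not code from any of these papers:

```python
def invalidated_nodes(children, below):
    """Nodes whose cached cost-to-goal is stale when the edge just above
    `below` changes. `children[n]` lists the nodes whose stored optimal
    path reaches the goal via n (i.e. the backward search tree)."""
    stale, stack = set(), [below]
    while stack:
        node = stack.pop()
        if node not in stale:
            stale.add(node)
            stack.extend(children.get(node, ()))
    return stale

# Chain goal <- a <- b <- c: a change just below the goal stales everything,
# while a change out at the leaf stales only one node.
children = {'goal': ['a'], 'a': ['b'], 'b': ['c']}
assert invalidated_nodes(children, 'a') == {'a', 'b', 'c'}
assert invalidated_nodes(children, 'c') == {'c'}
```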
Since your example scenario appears to let the user move any wall they desire, you will suffer this problem if you use D*, D*-Lite, LPA*, etc. The running time of your algorithm will be highly variable, dependent upon user input. In general, "this is a bad thing"...
As an example, Alonzo Kelly's group at CMU had a fantastic program called PerceptOR that tried to combine ground robots with aerial robots, all sharing perception information in real time. When they tried to use a helicopter to provide real-time cost updates to the planning system of a ground vehicle, they ran into exactly this problem: the helicopter could fly ahead of the ground vehicle, seeing cost changes closer to the goal and thus slowing down their algorithms. Did they discuss this interesting observation? No. In the end, the best they managed was to have the helicopter fly directly over the ground vehicle - making it the world's most expensive sensor mast. Sure, I'm being petty. But it's a big problem that no one wants to talk about - and they should, because it can totally ruin your ability to use these algorithms if your scenario has these properties.
There are only a handful of papers that discuss this, mostly by me. Of the papers written by the authors of the original algorithms listed in this question (or their students), I can think of only one that actually mentions this problem. Likhachev and Ferguson suggest estimating the scale of the update required, and flushing the stored information if the incremental update is estimated to take longer than a fresh start. This is a pretty sensible workaround, but there are others too. My PhD thesis generalizes a similar approach across a broad range of computational problems; that's getting beyond the scope of this question, but you may find its references useful, since it has a thorough overview of most of these algorithms and more. See http://db.acfr.usyd.edu.au/download.php/Allen2011_Thesis.pdf?id=2364 for details.
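For what it's worth, the shape of that workaround is roughly the sketch below. This is only my reading of the idea; `estimate_affected_nodes`, `plan_from_scratch`, `repair_plan`, and the overhead factor are hypothetical placeholders, not Likhachev and Ferguson's actual interface or constants:

```python
def replan(planner, changed_edges, overhead_factor=2.0):
    # Hypothetical planner interface, for illustration only.
    # Estimate how much of the stored search the changes would touch...
    estimated_repair = planner.estimate_affected_nodes(changed_edges)
    # ...and fall back to a fresh, non-incremental search when the
    # incremental repair (with its bookkeeping overhead) looks slower.
    if estimated_repair * overhead_factor > planner.num_nodes():
        planner.reset()  # flush all stored incremental state
        return planner.plan_from_scratch()
    planner.apply_edge_changes(changed_edges)
    return planner.repair_plan()  # normal incremental update
```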