Preparing for Future Requirements
Correct prediction is hard. Before a football match starts, hundreds of scores
could be guessed. Once the final whistle is blown, there is always someone
whose prediction turns out to be right, by the basic pigeonhole principle.
Another kind of correct prediction is made by fortune-tellers, who trick you
with ambiguous words. Since words can carry multiple interpretations, the
correctness of the predicted claims is confirmed by victims who wanted to
believe.
In a new territory full of unknown unknowns, no one can always make the right
decision beforehand. What we can realistically achieve is to fail less
frequently, by reducing our exposure to future uncertainty.
But how? There are a few secret sauces in the practice of software
development that could shed some light. Agree or disagree, like or dislike,
these panaceas are listed here; take them at your own risk.
Enumerating it all
Implementing all possible functions is one way to prepare for the future. A
function not needed now might turn out to be useful; one never knows. A typical
example is Microsoft Office, which encompasses so many productivity features
that you may know only half of the common ones and a tenth of the
special-purpose ones. From a user's perspective, installing everything would be
future-ready; from a developer's perspective, as long as the product includes
all possible functions, the need for any further change is probably close to
zero.
Such practice leads to "bloatware", in both size and function, that wastes
everybody's resources. Users complain about paying for unused functions, and
developers cannot tell from the current user base whether a function is
necessary or not.
Since you cannot cope with the ever-growing complexity of a software product, it is better to go for full customisability.
General-purpose software can do things unimaginable now. Instead of
enumerating functions one by one, you can cover the space of possible functions
with general rules: design a general-purpose programming language, or a
domain-specific one, to describe any configuration that may be needed in the
future. To a certain extent, a Turing-complete language is an "all-purpose
adhesive" that can attach any functionality to the software product on demand.
In the hardware domain, Swiss Army knives and 3D printers are good examples of
the same idea.
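As an illustration, a domain-specific language can be as small as an interpreter over a handful of commands. The sketch below is a toy example; the `add`/`mul` command set is invented for illustration and not taken from any real product.

```python
# Toy domain-specific language: instead of enumerating every function
# up front, ship a tiny interpreter and let future needs be expressed
# as scripts. The command set here is hypothetical.
def interpret(script: str, value: float) -> float:
    """Apply each 'op amount' line of the script to the value in turn."""
    ops = {"add": lambda x, y: x + y,
           "mul": lambda x, y: x * y}
    for line in script.strip().splitlines():
        op, amount = line.split()
        value = ops[op](value, float(amount))
    return value

# A "configuration" that could be written long after the software shipped:
print(interpret("add 2\nmul 10", 1.0))  # -> 30.0
```

New commands extend the `ops` table, so the interpreter itself never has to anticipate which scripts users will eventually write.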
It is not easy to learn a general-purpose or domain-specific
language. Customisation merely transfers the challenge to end users.
Since you cannot expect users to know everything, it is better to make the software product extensible.
Plug and play
Make the basic functions extensible to advanced users, so that they can add
new functions or behaviours against a given interface. It is as if you had a
standard socket: any future extension that follows the standard simply plugs
its functionality in. Because the standard is followed, new functionalities
will not violate or interfere with existing ones. Good examples include the
Eclipse SDK, web browsers, and the Linux kernel.
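The "standard socket" idea can be sketched in a few lines. The `Plugin` interface and `Registry` below are hypothetical names, not any real plug-in API; they only illustrate how a fixed interface lets extensions plug in without interfering with the host.

```python
# Minimal plug-in sketch: extensions implement a fixed interface and
# register against a "socket" (the registry). All names are hypothetical.
from abc import ABC, abstractmethod

class Plugin(ABC):
    """The standard 'socket' every extension must fit."""
    @abstractmethod
    def run(self, text: str) -> str: ...

class Registry:
    def __init__(self):
        self._plugins = []

    def install(self, plugin: Plugin):
        # Only objects honouring the interface may plug in.
        if not isinstance(plugin, Plugin):
            raise TypeError("extension must implement the Plugin interface")
        self._plugins.append(plugin)

    def process(self, text: str) -> str:
        # Each extension transforms the output of the previous one.
        for plugin in self._plugins:
            text = plugin.run(text)
        return text

class Shout(Plugin):
    """A third-party extension written long after the host shipped."""
    def run(self, text: str) -> str:
        return text.upper()

registry = Registry()
registry.install(Shout())
print(registry.process("hello"))  # -> HELLO
```

The host never needs to know which extensions exist; it only enforces the interface, which is what keeps new functionality from breaking old.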
The drawback: the basic functions scope the future extensions of the software,
so it is not possible to revolutionise the requirements. For example, an IDE
with extensions is still an IDE, and a browser with extensions is still a
browser.
Since this restriction is not always welcome for more ambitious products, we
need the capability to adapt.
A team has limited resources. A smart company would design a platform for
others: Apple's App Store, WeChat's mini-apps, and Alibaba's e-commerce are
platforms that allow third-party developers to add building blocks with more
freedom. At little additional cost, the infrastructure provider of the
platform can nurture a thriving ecosystem, on the premise that it attracts
enough developers to contribute. The incentive for those developers is the
benefit of having all the other products supporting each other. In a sense,
open-source ecosystems such as Linux are also platforms for inclusive growth.
The drawback: the initial investment in a platform is huge, and it requires
all the necessary conditions to be ready.
If a small team cannot afford to provide a platform, what about an atomic function?
Since "all-inclusive" is not the best preparation for the future, an atomic
function may serve a minimal purpose with focus, and serve it well for many
users. Even if a quality service is not used today, it may be used tomorrow.
Containerisation (e.g., Docker), microservices, and the API economy are recent
efforts towards this goal. For atomic high-quality functions to be extremely
useful, integration needs to be standardised to the extent that everything is
exchangeable with everything else in the ecosystem.
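A minimal sketch of such an atomic service, using only the Python standard library: one focused function (`slugify`, an invented example) exposed behind a standard HTTP-plus-JSON interface, so that any client speaking the protocol could exchange it for another implementation.

```python
# Sketch of an "atomic" service: one small, focused function behind a
# standard interface (HTTP + JSON). The slugify endpoint is hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def slugify(title: str) -> str:
    """The single, focused function this service offers."""
    return "-".join(title.lower().split())

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"slug": slugify(payload["title"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the service in the background and call it like any other client would.
server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

request = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"title": "Future Requirements"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["slug"])  # -> future-requirements
server.shutdown()
```

Because the interface is the standard protocol rather than the implementation, the service behind it can be rewritten, containerised, or replaced without the clients noticing.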
As the proverb goes, even good wine fears a deep alley: if potential customers
cannot find you, the quality service may not reach enough of them.
To overcome the marketing issue, software functions could be promoted through advertisement.
Network of ads
Naturally, advertisement makes the service known to clients directly. Today's
advertising is already so finely targeted that the services closest to users'
demands are the most likely to be engaged. Therefore, the best way to sell the
service goes beyond talking and doing: one must establish a brand amongst the
users. Ideally, users would tell their followers how good the software is;
social networks are typically effective for gaining trust among them.
The effect of untargeted advertisement diminishes quickly as the social
distance to your potential clients increases.
After all, if you want the product to satisfy future users' demands, that is,
to be prepared to accommodate future changes, then it is best to have
something "invariant", something that is core to the service. Only by adapting
to such changes over time can the business last longer. Identifying such
invariant functions is key to grasping the core requirements.
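To illustrate, one way to keep something invariant is to isolate the core rule behind a stable interface and confine change to replaceable adapters. The sketch below uses hypothetical names; the pricing rule is invented for illustration.

```python
# Sketch: the invariant core stays fixed while requirements change only
# in the replaceable adapters around it. All names are hypothetical.
class PricingCore:
    """Invariant business rule: total = unit price x quantity."""
    def total(self, unit_price: float, quantity: int) -> float:
        return unit_price * quantity

# Adapters come and go as requirements evolve; the core stays put.
def render_plain(core: PricingCore, price: float, qty: int) -> str:
    return f"total: {core.total(price, qty):.2f}"

def render_html(core: PricingCore, price: float, qty: int) -> str:
    return f"<b>{core.total(price, qty):.2f}</b>"

core = PricingCore()
print(render_plain(core, 2.5, 4))  # -> total: 10.00
print(render_html(core, 2.5, 4))   # -> <b>10.00</b>
```

When a new channel appears, a new adapter is written against the same core, so the part of the product that must never break is also the part that never changes.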
Further reading
- Yijun Yu, Yu Lin, Zhenjiang Hu, Soichiro Hidaka, Hiroyuki Kato, and Lionel
Montrieux. Maintaining invariant traceability through bidirectional
transformations. In Proceedings of the 34th International Conference on
Software Engineering (ICSE '12), IEEE Press, 2012, pp. 540-550.
- Bihuan Chen, Xin Peng, Yijun Yu, and Wenyun Zhao. Uncertainty handling in
goal-driven self-optimization: limiting the negative effect on adaptation.
Journal of Systems and Software, 90:114-127, April 2014.
- Pierre A. Akiki, Arosha K. Bandara, and Yijun Yu. Visual Simple
Transformations: empowering end-users to wire Internet of Things objects.
ACM Transactions on Computer-Human Interaction, 2017 (in press).
- Yijun Yu, J. C. S. P. Leite, and J. Mylopoulos. From goals to aspects:
discovering aspects from requirements goal models. In Proceedings of the 12th
IEEE International Requirements Engineering Conference (RE '04), 2004,
pp. 38-47.
- Michel Wermelinger and Yijun Yu. Analyzing the evolution of Eclipse plugins.
In Proceedings of the 2008 International Working Conference on Mining Software
Repositories (MSR '08), ACM, 2008, pp. 133-136.