Safety analysis is an activity governed by pragmatism and practicality rather than formal abstractions. Even the concept of a "hazard" has no universally agreed definition, and there is no deterministic method for finding the set of hazards for a system. In this context, any claims about new challenges or methods must be tested against their usefulness. In this paper we investigate the concept of a "system of systems". The rise of network-enhanced capability, particularly in the military domain, has led to a distinction between "large integrated systems" and "true systems of systems". This distinction has rightly been questioned by researchers who point out that all safety analysis should involve socio-technical considerations, and who claim that there is no advantage in treating so-called systems of systems with separate methods. We identify a range of circumstances in which existing hazard identification techniques, including those explicitly designed for socio-technical analysis, are unreliable in finding certain types of hazard. We claim that distinguishing these circumstances will improve the management of hazard identification and assessment in organizations with multiple interacting equipment programs. We support this claim with observations of existing difficulties that organizations have with the "system of systems safety" issue. The circumstances we identify are a subset of those commonly labeled "system of systems". Broad application of new techniques without an understanding of where they are likely to be most effective will be counterproductive. We therefore recommend cautious investment in system of systems safety, with a strong focus on measuring the costs and benefits of new modeling and hazard identification techniques.