BUS (Building Use Studies) methodology

The BUS methodology is the original method of evaluating occupant satisfaction and has been developed over the last 30 years. It is an established, tried and tested way of benchmarking levels of occupant satisfaction within buildings against a large database of results for similar buildings. Results can be used to create solutions to improve the occupant experience and optimise building performance.

The method was developed and refined during the 1990s, when it was used for the seminal series of government-funded PROBE building performance evaluation studies regularly published in the industry press. It has since been used on the Carbon Trust’s Low Carbon Accelerator and Low Carbon Building Programme and on the Technology Strategy Board’s Building Performance Evaluation programme.

Over 45 key variables are evaluated covering aspects such as thermal comfort, ventilation, indoor air quality, lighting, personal control, noise, space, design, image and needs. Twelve summary variables provide a snapshot of the overall building performance.

Comments

Wednesday, 28th August 2013

Richard Reid, Arup

In relation to the labour time for filling out the spreadsheet for an offline survey: this is a labour-intensive process, but it is also a very useful first opportunity to get a feel for the occupants’ comments and build a picture of the building.

There is an online survey available (in eight different languages) that has been developed as part of the new BUS methodology website and partner portal. When an occupant submits the survey, it automatically inputs the results into the Excel pro-forma. This still needs to be checked for completion errors, language issues and so on. The online response rate tends to be much lower. We are currently reviewing response rates to build a picture and then explore how we can improve them. We have started to think about how the online survey could be made more engaging so that respondents are more inclined to fill it in. We will be exploring this with our partners in due course.
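As a purely illustrative aside, here is a minimal sketch of the kind of completion check described above, assuming responses are exported to a CSV file. The file name and column names are hypothetical; this is not the actual BUS tooling.

```python
import pandas as pd

# Columns holding 1-7 scale answers; names are invented for illustration.
SCALE_COLS = ["temp_overall", "air_overall", "noise_overall"]

df = pd.read_csv("survey_export.csv")  # hypothetical export file

# Flag rows with missing answers or values outside the 1-7 scale range.
out_of_range = ~df[SCALE_COLS].apply(lambda col: col.between(1, 7))
incomplete = df[SCALE_COLS].isna()
flagged = df[(out_of_range | incomplete).any(axis=1)]
print(f"{len(flagged)} of {len(df)} responses need manual checking")
```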

Regarding internet surveys and the need to walk round the building, obviously this is extremely important and an internet survey may not be suitable, but if the Occupant Satisfaction study is part of a wider Building Performance study then the necessary walkaround/survey would be undertaken anyway, and the internet survey could save time and allow you to get a larger sample.

There is no right or wrong answer in this instance. It is about evaluating the situation on a project by project basis.

Wednesday, 28th August 2013

Kerry Mashford, National Energy Foundation

Regarding the forgiveness factor and appreciation of architectural aspects of the design: some 14 years ago, my own research into the factors contributing to wellbeing in buildings also covered various aspects of design such as proportion, size of space, legibility of buildings, colour, surface texture, hard/soft surfaces, and the presence of plants and living things. In my experience, getting these right (or more right) has a significant impact on occupants’ perception of, and satisfaction with, the building.

Wednesday, 28th August 2013

Mat Colmer, Innovate UK

Some specific issues from the TSB BPE programme lie around the interpretation of the results in relation to other outputs from the studies. Despite what we hoped was clear training and guidance for project leads prior to undertaking the BUS, and despite our seeking consistency across BPE studies, the interpretation of the outcomes varies from project to project.

Clearly with BPE we are looking at the BUS to help contextualise performance data from other aspects of the projects, yet evaluators on the programme have come across value judgements about the results that could be misleading to any reader who is not fully aware of the context of the building. Such statements include the use of the terms “average”, “good” and “bad”. We are clear that the results should be described as “higher than”, “below” or “the same as” the midpoint of the dataset. Just because a building ranks well against a particular criterion does not mean that it is necessarily “good”; that is a value judgement.
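A minimal sketch of that reporting convention, with invented numbers (real BUS benchmarking is more involved than a simple midpoint comparison):

```python
# Variable names and values are illustrative only, not real BUS output.
building_score = 4.8        # mean 1-7 response for one survey variable
benchmark_midpoint = 4.2    # midpoint of the comparison dataset

delta = building_score - benchmark_midpoint
if delta > 0:
    verdict = "higher than the benchmark midpoint"
elif delta < 0:
    verdict = "below the benchmark midpoint"
else:
    verdict = "the same as the benchmark midpoint"
print(f"Score {building_score:.1f}: {verdict} ({delta:+.1f})")
```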

If we’re not careful BUS results will be reported and interpreted inaccurately.

Wednesday, 28th August 2013

Esfandiar Burman, Aedas

Being in the building on the day of the survey and observing how the building is being used is definitely part of the 'context', but not all of it. Especially when it comes to complex non-domestic buildings (and you know the size of the schools we had to deal with!), it is almost impossible to understand and capture all the details on the day of the survey. That's where the designers come in. Yes, there is always a gap between design intents and procurement, but this is all the more reason to start from the design intents to understand what went wrong, what could be done and what lessons could be learned for future projects. Interpretation of BUS without this in-depth knowledge of the building would be difficult and at times misleading. I can understand concerns about designers' beliefs and prejudices. However, designers, and indeed contractors, can provide invaluable input that helps us better understand the 'context'.

People may be reluctant to admit to failures and mistakes for understandable reasons. We are all familiar with the sensitivities associated with POE studies. However, some lessons are learned and turned into tacit knowledge even if people don't admit it!

Wednesday, 28th August 2013

Judit Kimpian, Aedas

We have now conducted five BUS surveys in schools and have another two in progress for Innovation Centres as part of the TSB BPE programme. We found the surveys extremely helpful and revealing, in particular the comments read in the context of the energy consumption study.

One aspect that we had regular discussions with our evaluators about was the impact of the architectural design on building performance. As architects, we are probably biased, but when looking at the results we could not help noticing that there appears to be a connection between occupant satisfaction with the building design and perception of comfort. Two buildings with very similar CO2, humidity and temperature readings returned completely different levels of satisfaction with air quality, temperature and comfort. Occupants of the building they were neither happy with nor proud of appeared to be less tolerant of thermal and air quality variations. We called this the ‘forgiveness factor’ in our report.

I am not sure whether, in the analysis of buildings, we pay enough attention to evaluating the impact of ‘architectural’ aspects of the building on comfort and productivity, and I would be very interested to hear others’ views on this. After all, the BUS looks not at building services alone but at the occupant experience as a whole. Should we advocate a more systematic inclusion of architectural factors in evaluating building performance and occupant satisfaction?

Benchmarking against similar buildings and larger and consistent samples would be helpful.

Conducting the survey in large buildings is extremely labour intensive at the moment – the time is not necessarily in the survey itself but in transcribing the results from paper into an Excel format.

It is very helpful to have conversations with the FM department, but bear in mind that because they deal only with problems, they tend to have a skewed view of how occupants perceive the building. Occupants tend to notice different things.

Wednesday, 28th August 2013

Colin Grenville, E.on Energy

Principally, E.on's involvement stems from being contracted to retrofit energy conservation measures (often multiple measures) to existing buildings. In the absence of substantial sub-metering we are sometimes limited to a “whole facility” approach to performance measurement, where we don’t fully understand the performance of individual improvements. BUS gives us a tool whereby we can measure performance not just in terms of total avoided kWh but also in terms of improvements to user satisfaction. This will enable us to meet client energy cost saving targets within an energy performance contract and also demonstrate wider benefits, e.g. tighter control of building temperature, or intelligent demand-led variable air flow instead of constant volume.

Since we hold enduring relationships for perhaps 7-12 years under energy performance contracts, I would welcome pointers to examples where buildings have had several surveys conducted over a period of years. It would be useful to have feedback on what actions building operators have undertaken in response to past survey results – and whether those actions can be linked to subsequent performance improvements or investments. To what extent does BUS drive actual improvements?

Understanding building set-point data for heating/cooling for example and whether or not the building has only natural ventilation or is conditioned would be useful in understanding occupant feedback on their own perceived comfort.

In discussion with building occupants we often ask whether they experience periods of excessive warmth or cooling as a quick pointer to whether the building services cope, and I suspect the answers are partially reflective of current conditions – weather, occupancy levels and so on. Asking the same questions when the building services are creaking, as opposed to coping admirably, will probably elicit a different response, so context is everything. However, given a large enough data set, we will arrive at a range of responses – the middle of which is probably a useful benchmark.

Wednesday, 28th August 2013

Roderic Bunn, Building Services Research and Information Association (BSRIA)

"Personally, I am more inclined to compare the description of building and occupants’ assessment of it with the design brief and building specification rather than comparing it with a database that does not necessarily represent important characteristic of the individual building."

Do both, not one or the other.

As with many things to do with building performance evaluation, "context is all". Although internet questionnaires might get higher response rates in certain situations, without knowledge of the context (and the context in place on the day of the survey) one would not be able to interpret the BUS results to any level of detail. The act of carrying out a survey on site – being in the building on the day to observe, to assimilate and to understand how the building is being used – is just as valuable as the statistical results from the BUS. They complement each other. With that knowledge one can indeed go back to the design intent. But there is a risk of presuming that the design intent has survived the procurement process unscathed!

I have a worry that designers of buildings, or indeed anyone closely associated with a building's procurement, cannot conduct BUS surveys objectively. Vested interests won't be an issue if the results validate the designers' wishes, hopes and beliefs, but they are an issue when the results are unexpected and disappointing. People then look for excuses. In the worst cases they rubbish the method, criticise the surveyor, and disown the results. This is nothing new of course. Designers bank their BREEAM rating and industry awards plaudits first, but by that time it's too late for real feedback to be acceptable unless it matches and validates the opinion of the designers. It's all about image. Disappointing results are commercially toxic to a good image.

So, how can we trust designers to be objective and honest when conducting a BUS survey, and to accept results that don't match their preconceptions?

Wednesday, 28th August 2013

Esfandiar Burman, Aedas

Using BUS in schools and getting a high response rate can be a bit difficult. Also, depending on the way schools are run, teachers may predominantly be based in one teaching space or may change classrooms frequently. We have found it useful to append a small copy of the layout plan of the building to the questionnaire and ask the teaching staff to locate their teaching space. This can give us invaluable insight and information about the building’s operation, especially when technical measurements (temperature, humidity, ambient noise levels, etc.) are carried out as part of a wider post-occupancy study and there is an opportunity to correlate BUS results with them. However, when teaching staff move frequently, it becomes more difficult to interpret BUS results and relate them to the spatial and physical characteristics of individual teaching spaces. Nonetheless, BUS results still provide a good overall view of how occupants perceive their building.

Statistical analysis of responses and building benchmarking are of course very helpful. However, the building context should be taken into account in analysing the results. As far as I understand, as things stand now, a school building is not necessarily benchmarked against its peer buildings, mainly because the database is still not large enough. This may cause some problems in interpreting the benchmarking outcomes. Personally, I am more inclined to compare the description of the building and occupants’ assessment of it with the design brief and building specification, rather than with a database that does not necessarily represent important characteristics of the individual building.

Therefore, statistical analysis and benchmarking should not be considered the main attributes of the BUS questionnaire but rather by-products of the process that may be helpful. What I have found more useful is people’s specific comments about their working space and building. BUS is a good platform for getting people’s feedback about their building in a structured manner. Interpretation of this feedback invariably requires good knowledge of the building context. Being too focused on benchmarking may compromise the huge potential of BUS as a diagnostic tool and as a way to compare ‘what a building is perceived to be’ with ‘what it was meant to be’.

Wednesday, 28th August 2013

Kate Fewson, Closed Loop Projects

I believe that the BUS methodology is the tool to use when looking to get a good overview of how people are finding a building. It is good on its own but better when combined with additional energy and contextual data. Its real strength is the large dataset reinforced by easy-to-understand-at-a-glance graphics, in particular the summary variables and indices. I find the comments that people give absolutely priceless, adding both richness and a deeper understanding to the survey.

Being an independent body carrying out the survey is important. I have often been asked whether the powers that be will know what a specific respondent has written. Being able to reassure them of their anonymity has helped get more honest feedback from occupants. In terms of BUS Methodology Partners delivering this service, it is an important point to emphasise to clients: our independent and impartial role will benefit the survey. However, if we cannot be on hand, there is definite merit in making sure the building staff who hand out and collect the surveys ensure respondents return completed questionnaires in a sealed envelope, to maintain this anonymity.

There have been one or two respondents who have struggled to answer some of the survey questions (the questions that use Scale B). For example, is the air being too still a bad thing? It is not apparent to them that the middle box on the scale means a Goldilocks-style “just right”. In those circumstances, it has been really useful to be on hand to chat when giving the questionnaires out and collecting them up in person. I think the other question people struggle to answer, which has been quite rightly pointed out, is the importance of control. Again, I think being on hand helps in these situations, and I would advocate the paper survey over the online version to help get a high response rate from respondents who have, in turn, understood the questions.

Wednesday, 28th August 2013

Adrian Leaman, The Usable Buildings Trust

The aims of the BUS methodology are relatively modest: better feedback on human needs in buildings in forms which are relatively simple to understand and administer, reasonably economical to carry out, but with enough clout to be used in wider and more detailed work if necessary.

Details:
This can be diagnosis of ongoing problems (e.g. tracking down the sources of dissatisfaction with air quality which may require detailed measurement of e.g. VOCs); as part of a wider portfolio of measures (e.g. in the Probe project where the BUS method was used in conjunction with detailed energy measurement and air tightness tests); as a component of PhD research (e.g. examining occupant behaviour in housing); or, as in the TSB project, bringing building evaluation assessment to a broader church. The key here is to balance qualitative with quantitative results with just enough detail to cover major performance topics. We call this: "Need to know not nice to have".

Benchmarking:
It is included because people ask for it, but it is not the main aim. Quite quickly benchmarking leads to quibbling, especially when the results don't match expectations. We try to make the benchmarking as robust and understandable as possible, based on past empirical results (not on guesses). Benchmarking takes you into the territory of large-scale statistical databases and all the associated methodological and resourcing problems that go with them. Attention shifts from what respondents are actually saying about the building, to debates about the ratings given, the differences between the traffic light scores, sample sizes, respondents' abilities to recall and so on. These are all legitimate concerns, of course, and are the subject of constant enquiry and refinement and, crucially, compromise. For example, as policy, we do not 'weight' or 'normalise' scores, because this can baffle people. We know, for instance, that school children or museum visitors or part-time staff will tend to rate the building higher than permanent staff, because they have less experience of the conditions or do not take much notice of them or give an answer which they think is expected of them. We use permanent staff as the basis for the benchmarking because the results are more reliable and consistent.

Validation:
This is the question which crops up most frequently, especially from academic researchers who have to justify their work against strict criteria, often derived from experimental science. The BUS method is an example of real-world research, so brilliantly covered in the book of the same name by Colin Robson. To validate in the strictest sense you need to show that the results from one sample survey tally with those from another survey in the same building, so that there is consistency and 'robustness'. You can, in other words, trust the method. To do this properly is expensive and we have never had the resources to do it. But we have carried out repeat surveys and published the results. If you need convincing about validation, look at BORDASS W. and LEAMAN A., Test Of Time, CIBSE Journal, March 2012, pp. 30-36, an update on the performance of the Elizabeth Fry Building, University of East Anglia, first studied in 1998 in the Probe series of post-occupancy studies, or BUNN R., Charities Aid Foundation: Project Revisit, Delta t, BSRIA, October 2008, pp. 6-10. In both cases the consistency over time is remarkable.

Generic questionnaires:
We use a standard, tweakable questionnaire because this is helpful in co-ordinating the benchmarking, and it is also much more affordable. We ask, for example in non-domestic buildings, about conditions for respondents at "their normal desk or work space". Most people will normally work at a desk, but not all. Some may have two or more 'normal' work spaces – classrooms, laboratories or courtrooms. Sometimes we may have to administer two questionnaires: one for the office desk and one for the laboratory, for instance. As circumstances differ widely, and this is the real world, there is an element of busking and improvisation involved. However, if you tailor the questionnaire too closely to a particular building or building type, you lose the ability to compare with others. What do you do, for instance, if you are studying a fire station and have no other fire stations in the dataset? Do you produce a one-off special for fire stations, or use the generic questionnaire, noting carefully the context and the comments made? We do the latter.

Residential housing:
That said, the housing questionnaire is crafted differently because housing throws up a whole range of different challenges. We only started work on the housing questionnaire in 2004 because up to that time there had been virtually no interest in, or funding for, building evaluation of housing. The exception was Fionn Stevenson's work, and Fionn helped us put together the housing version. We wanted some of the questions to cross over from the non-domestic to the domestic version for comparison purposes, but also to ensure that the housing questionnaire was short, practical and easy to administer. Housing fieldwork is often much more challenging, so it is usually harder to collect the data. A good example of the BUS survey in action in housing is Lancaster CoHousing Project: post occupancy evaluation, Green Building Journal 24, Summer 2013 (I've attached a (crude) scan of this for your personal use as it may be tricky to get hold of). As the housing work is still relatively new (progress is measured in decades!), the experiences of fieldworkers in carrying out the first generation of surveys are important to us. So reflection on their experiences is sure to follow.

Future development:
As the BUS method has been around since 1985, when it was introduced as part of a large-sample study of 'sick' buildings, its many users have contributed to the gathering momentum towards routine building evaluation studies. This is not just about appropriate survey methods, but about a more responsible approach to building assessment which, at least:

1. Is grounded in evidence or, at least, in reasonable probabilities of accuracy if based on models (that means continuous calibration and cumulative evidence).

2. Is clear about what is meant by a 'building' in a wider systems framework. This is not just about 'design' and the pre-occupations of designers; it is about social and environmental consequences and continuous improvement.

3. Guarantees intelligible feedback which can be acted on, so that it cannot be ignored, especially by policy makers and government. "Evidence-based policy not policy-based evidence!" to hijack an increasingly used catchphrase.

4. Investigates whether human needs are met. Obvious, and basic, but still the last rather than the first resort for many.

5. Is embedded in professional learning at all levels.

6. Is open-source, available to all, or at least to as many as practical.

7. Is cross-practice and multi-professional, just as we are trying to make the BUS method.

Wednesday, 28th August 2013

Ranald Lawrence, Bennetts Associates

1. As the most consistent and thorough tool currently available it should be commended to clients as a minimum standard for assessing occupant satisfaction. It is very useful that it takes account of quantitative/qualitative feedback.

2. It is important that the database of results grows as quickly as possible. It is also important to ensure the questions are broad enough to be applied to all users generally.

I do think there is a semantic issue with how the methodology attempts to measure comfort generally. To take temperature as an example, two questions are asked on the 1-7 scale: ‘too cold’ to ‘too hot’, and ‘uncomfortable’ to ‘comfortable’. These are in effect measuring the same thing – the pejorative “too” implies discomfort – although the second question does not differentiate between the two extremes. My suggestion would be that questions based on scales similar to the ASHRAE thermal sensation scale of ‘cold’ to ‘hot’, and the Nicol scale of ‘cooler’ to ‘warmer’ (in response to ‘how would you prefer to feel?’), would be less ambiguous, as it should not be taken for granted that the preferred temperature is a neutral 4.
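A hypothetical illustration of that two-scale approach (the respondent and the preference labels are invented; the point is that pairing a sensation vote with a preference vote reveals when the preferred state is not the neutral midpoint):

```python
# Seven-point sensation labels, plus a simple three-point preference scale.
sensation = {1: "cold", 2: "cool", 3: "slightly cool", 4: "neutral",
             5: "slightly warm", 6: "warm", 7: "hot"}
preference = {-1: "cooler", 0: "no change", 1: "warmer"}

# One invented respondent: feels slightly warm yet wants no change,
# i.e. their preferred temperature is not the neutral 4.
vote = {"sensation": 5, "preference": 0}
print(f"Feels {sensation[vote['sensation']]}, "
      f"prefers {preference[vote['preference']]}")
```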

Similar wording improvements might be made to the questions about light and noise. (It also occurs to me that sunlight is only considered in relation to ‘glare from sun and sky’ on a scale of ‘none’ to ‘too much’ – which might be taken to imply that sunlight is always a source of discomfort, whereas some sunlight animating spaces away from desks might be considered a positive.)

The split between summer and winter is useful, but it may be worth asking whether responses are affected by the specific conditions at the time the survey is taken. In the longer term, if environmental data were collected at the time of each survey, it may be possible to derive correction factors based on deviation from a standard average.
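A speculative sketch of what such a correction factor might look like; the functional form and coefficient are invented for illustration and would have to be derived empirically from paired survey and environmental data:

```python
def corrected_score(raw_score: float, survey_temp_c: float,
                    seasonal_mean_c: float, k: float = 0.05) -> float:
    """Shift a 1-7 score toward what it might have been under average
    seasonal conditions. k is purely illustrative, not an empirical value."""
    return raw_score - k * (survey_temp_c - seasonal_mean_c)

# e.g. a heatwave survey day: 29 degC against a seasonal mean of 21 degC
print(corrected_score(5.4, survey_temp_c=29.0, seasonal_mean_c=21.0))
```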

Other points:
1. Control over cooling/lighting etc. It would be interesting also to find out users’ attitudes to their answers to these questions (from ‘no control’ to ‘full control’): would more control be preferred? This might be addressed by a question at the end, similar to the ‘Importance of heating, cooling etc.’ question, that asks about the ‘Importance of control over heating, cooling etc.’, which may vary from building to building depending on the environmental and servicing strategies employed.

2. It might be of value to gather more data on users’ perceived feeling of ‘connection’ to the outdoors – a significant variable that affects servicing strategies and may be a major psychological factor in the perception of comfort.

3. A general point: it is very difficult to use BUS alone to draw conclusions about different spaces within one building – beyond a superficial analysis of the range of scalar responses from different users and the individual comments received – yet this is perhaps the most useful information for learning design lessons for the future. While not replacing the data BUS collects for comparative benchmarking, independent POEs tailored to individual projects can significantly enhance BUS by providing environmental data to sit alongside individuals’ responses. Such data can illuminate the diversity (or lack of diversity) of interior environments in a building, how well they are tailored to different needs, and how this affects occupant satisfaction.

Wednesday, 28th August 2013

Kerry Mashford, National Energy Foundation

Getting a good response rate for a BUS is not easy and not to be underestimated. Even with guidance and training to reinforce this, teams undertaking BUS as part of the Technology Strategy Board Building Performance Evaluation projects have had variable success. In both domestic and non-domestic projects there are numerous practical challenges to overcome – but addressing them to get the best possible response rate is worthwhile.

Undertaking a BUS in isolation is useful, but also, a missed opportunity. To understand and appreciate the user / building interaction it is important to capture other aspects. So far, we don’t know how many of these are significant, or to what extent. But if we capture them as we go along, when the data set is large enough, we will be able to conclude which data are most significant.

Contextual information recommended for capture at the same time includes the following (one possible way of recording it is sketched after this list):
- Internal and external temperature, ambient barometric pressure and humidity (air quality if possible)
- Ambient noise levels internally and externally
- Weather conditions – rain / sun / cloud / wind
- Day of the week
- Timing relative to the calendar of the organisation being surveyed – e.g. end of term / end of quarter / financial year
- Other salient factors creating stability or change within the organisation
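A minimal sketch of one way to record this context alongside each batch of responses; the field names and example values are assumptions for illustration, not part of the BUS methodology itself:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SurveyContext:
    survey_date: date         # day of the week follows from the date
    internal_temp_c: float
    external_temp_c: float
    humidity_pct: float
    pressure_hpa: float
    internal_noise_dba: float
    external_noise_dba: float
    weather: str              # e.g. "rain", "sun", "cloud", "wind"
    calendar_note: str        # e.g. "end of term", "financial year end"
    other_factors: str = ""   # anything creating stability or change

ctx = SurveyContext(date(2013, 8, 28), 23.5, 18.0, 55.0, 1012.0,
                    42.0, 58.0, "sun", "start of term")
print(ctx.survey_date.strftime("%A"))  # day of the week
```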

Additionally, tracking BUS results over time – say, throughout the year and/or at the same time year on year – may provide insight.

Different users experience the same building in different ways. The obvious example is transient users – shoppers, hotel guests and so on – but full-time users will also have different experiences depending on their duties.

Cross referencing BUS results with other data and investigations about building performance leads to an understanding, not only of how the building is working and serving its occupants, but why – and hence what to do next.

Wednesday, 28th August 2013

Richard Reid, Arup

A big advantage of BUS (besides enabling comparison of how occupants perceive a building before and after interventions or changes, or before and after moving buildings) is that it uses a questionnaire with the same core questions, which allows us to benchmark the performance of the study building against other similar buildings, whether internationally or here in the UK. There are now approaching 700 buildings in the database.

Some of the most useful discussions to have are with the building managers, estate managers and sustainability managers/directors as they see the benefit of understanding how the occupants perceive the building to inform decisions moving forward (e.g. estate rationalisation, moving to a new building).

Another argument that could be used to counter this is that if the sample size is big enough and you get a good response rate, it will be easy to identify the individuals who are not being objective and who are the exception rather than the rule. Looking not just at the quantitative results but at the comments as well can help to surface the issues people are experiencing and to identify this.
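A hypothetical sketch of that idea, with invented scores – responses far from the rest of the sample stand out once the sample is large enough:

```python
import statistics

# Invented 1-7 responses to a single question from one building.
scores = [5, 4, 5, 6, 5, 4, 5, 1, 5, 6, 4, 5]
mu = statistics.mean(scores)
sigma = statistics.stdev(scores)

# Responses more than two standard deviations from the mean stand out;
# read those respondents' written comments with particular care.
outliers = [s for s in scores if abs(s - mu) > 2 * sigma]
print(outliers)  # -> [1]
```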

Tuesday, 30th July 2013

Paul Hinkin, black architecture

I would go a bit further than Richard's comments and suggest that if we are going to deliver user-centred design solutions then we need to design buildings from the inside out. This would challenge the current fashion for iconic architecture that is sold on image rather than philosophy. We designed our project for Catholic Aid for Overseas Development (CAFOD) and have continued our relationship with them for three years since they moved in. They undertook their own satisfaction surveying in their previous premises and have repeated the exercise twice since moving into their new building. The results have shown that their satisfaction with their new building improved over the first eighteen months of occupation as they became familiar with their new open-plan working environment.

We are trying to encourage other clients to undertake similar exercises, although we have experienced some resistance from HR managers who are concerned that people will simply moan about their existing accommodation rather than provide objective criticism. I suspect this is simply a problem of lack of familiarity, and one that will ease as the BUS methodology becomes more widely known.

Tuesday, 30th July 2013

Richard Reid, Arup

Buildings exist to meet the needs of the people that use them. Therefore, as designers, contractors, developers, operators and managers, we have a responsibility to ensure that the buildings we come into contact with meet this requirement. To do this we need to listen to, and learn from, the occupants using buildings and understand what is good and bad.

The introduction of carbon reduction targets and the need to reduce consumption (due to cost and limited resources) mean the energy performance of buildings is critical. Closing the energy ‘performance gap’ is essential to meeting these targets and needs; it is also something we have a responsibility to do anyway, to deliver on our promises and to educate our clients.

Energy performance evaluation cannot, however, be completed in isolation; it is imperative that it is evaluated together with occupant satisfaction. Only when these two pieces are combined and delivered in harmony to create true ‘Building Performance Evaluation’ will we have the feedback needed to meet the needs of the users and clients for whom the buildings are created in the first place!

So that’s why I think evaluating occupant satisfaction is so important. The BUS methodology gives us a way of evaluating it in a structured way and benchmarking the results against a large database of buildings. This database is not yet big enough, though, and we need to grow it for the industry to truly understand how occupants perceive our buildings. That’s why Arup took the decision to open the BUS method to a partner network: to make it available to everyone, and to ensure the funding is there so it can be continually developed and improved for the industry as a whole.

Tuesday, 30th July 2013

Tamsin Tweddell, Max Fordham

Based on having completed my first three surveys, I would recommend it to clients on the basis that it is the best tool available for assessing building occupant satisfaction and provides both quantitative and qualitative feedback. It is easy to think you can devise your own questionnaire and avoid the need to pay a licence fee, but the advantage the BUS methodology has over this is that the results are benchmarked against other buildings; without this comparative benchmarking, the subjective responses are rather meaningless. Also, because the survey has been developed over a period of time, the questions have been road-tested and refined. There is a skill to writing survey questions that do not bias the responses, and the DIY approach risks introducing bias.

I have encountered some challenges in using the survey in schools.
• My initial reaction to the questions was that several were not appropriate for teachers in classrooms – for example, "Do you sit near a window?" or "How many hours do you spend at your desk?". You risk alienating respondents when they feel the questions are not tailored to them. To a certain extent this can be overcome by speaking to individuals as you distribute the surveys. The counter-argument to "it's not tailored to us" is that the questionnaire needs to be fairly generic to build a data set large enough for meaningful comparisons.
• Timing could be an issue. I've just done surveys on some of the hottest days of the year, and unsurprisingly there was a high degree of dissatisfaction with the temperature, which may have affected other results too. The school that, in my opinion, was one of the best I've ever seen had an overall rating at the 15th percentile, and I'm sure the heat had an influence on this. Maybe it is better to do surveys in spring or autumn to avoid extreme weather, but a school may have many staff who are there for the first time and who won't have experienced all the seasons.
• I have found the logistics of distributing and collecting surveys difficult in these schools. It is perhaps easier to walk round an office handing out surveys and speaking to people, but I wasn't allowed to visit teachers in their classrooms and had to distribute in the staff room and rely on respondents to return their forms to a central point. This was not ideal and perhaps resulted in lower response rates. Where you have an established relationship with the building occupants, or at least the client, this might be easier.

I'm sure some of these issues are part of the learning experience. They are my first impressions.
