July 8, 2009

Customer Centricity

aristotle.jpg

"We are not studying in order to know what virtue is, but to become good, for otherwise there would be no profit in it." - Aristotle

Virtue in Aristotle's conception could not be realized theoretically; it had to be the product of experience and direct action that led to "good" happening. Similarly, theoretical conceptions of "customer-centricity" and "caring for the customer" don't make organizations more focused on great outcomes for their customers; only actions born of experience and true learning do--actions that customers themselves recognize as having helped them.


So far, I assume I have agreement (whether or not you are Aristotelian).  But what in Heaven's name, you might ask, does this have to do with IT organizations? This is the QuestionPro Blog, after all, not Philosophical Inquiry. The simple answer: everything.


So how indeed do we show that IT organizations are "virtuous?" In order to do so we need to establish three things, the first theoretical and the second and third practical:

  • We care about the customers we serve
  • We act on that caring every day
  • These actions are attested to by our customers as having helped them do something better

Let's go about this in an analytical way. First of all, who indeed are our customers? I argue that our customers are not only the internal people who avail themselves of our services but also the external partners and customers who do business with our companies. The net: we have customers, just like anyone else. Do we care about them? I have never seen an IT organization whose charter did NOT include clear mention of its role as, inter alia, a services organization. Serving is caring as long as you mean it. So far, we are covered on the theoretical side pretty well, but as Aristotle admonishes us, the practical side is what matters more.


Do we act on that caring every day? I would argue that the answer is no. Too much of the "services" part of our collective work gets intermediated by bureaucracy, abstraction, and fatigue. Again, it is not a question of bad intention--just of a lack of clear application of our fundamental premise every day. Do we do good every day? Yes. Do we do enough good every day? No.
Finally, do our customers recognize our actions, and are they positive about the help we've rendered? This is the simplest question of all. The answer is a stentorian (and ironic) NO. In my experience, it is very rare indeed to find people touting the greatness of IT.


Well, what do we want to do about it? In previous months, I've written that justifying whether "IT matters" is unnecessary and counterproductive because it gives credence to the premise that indeed we don't matter. However, I suggest something very different here: let's make it easier to show that we are virtuous not by defending ourselves but by acting like ourselves. Let's ask our customers what they want and then see how we stack up against what they ask of us.


If I were you I'd go to any crowd-sourcing tool (e.g., IdeaScale) and set up an instance to get your customers' input on your organization and what they want of you. Solicit input from internal and external customers. And through hearing them and interacting with them, I can bet my bottom dollar you'll find you are already covering 90% of what they want. And that's a pretty good ratio, certainly befitting virtue.


My own experience dealing with IT organizations has been generally very good. While at Microsoft, I found IT to be very helpful and open to new ideas on how to improve. The same is true in my current company, Ascentium. Am I lucky? Maybe. Am I a customer of IT? Yes. Am I touting IT? Yes.


Wow, we just completed a virtuous cycle. Now let's go do a lot more.

Romi Mahajan is Chief Marketing Officer of Ascentium Corporation. Before joining Ascentium, he spent 7+ years at Microsoft where his last role was as Director of Technical Audience and Platform Marketing. Romi is widely published in the areas of technology, politics, economics, and sociology.

July 6, 2009

QuestionPro Updates 7/7

Fire or no fire, software development continues. I wanted to share a few updates to QP that we've pushed out or are in the process of rolling out:


  1. N/A Option on Matrix Style Question

    Issue: In rating (Likert) style scales, sometimes users want to include an N/A option for each of the items in the matrix. This introduces a reporting issue with respect to calculating the mean. Users who have used our advanced report configuration had to manually update the reporting to exclude the "N/A" option from the mean calculation. We've updated the system to make this much easier - see the screenshot below:





  2. Row and Column Highlighting on Matrix Style Questions
    We've enabled both row-level and column-level highlighting by default. We undertook some extensive studies on cognitive stress and response rates - visual elements that help users keep track of items dramatically increase response rates.

  3. New Question Type - Net Promoter Score
    Users in the past had to go into our Report Configuration section to enable the Net Promoter Score model on a particular question. We've again made this simpler - just create a question with the Net Promoter type and everything else is already done for you (the standard arithmetic, along with the N/A-aware mean from item 1, is sketched after this list).
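
For those curious about the arithmetic behind items 1 and 3, here is a rough, illustrative sketch (not our production reporting code) of an N/A-aware mean and the standard Net Promoter Score calculation:

    # Illustrative only - not QuestionPro's internal reporting code.

    def mean_excluding_na(responses, na_value="N/A"):
        """Mean of a rating-scale item, ignoring N/A selections."""
        scores = [r for r in responses if r != na_value]
        return sum(scores) / len(scores) if scores else None

    def net_promoter_score(ratings):
        """Standard NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    print(mean_excluding_na([5, 4, "N/A", 3, 5]))    # 4.25 (the N/A is simply dropped)
    print(net_promoter_score([10, 9, 8, 6, 10, 2]))  # 50% promoters - 33% detractors = ~16.7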




Power Outage - Recap #fisherplazafire

As many of you are aware, we had a day-long outage on Friday. I wanted to give everyone a sense of what happened and what we are doing about it.

On Friday (July 3rd) there was a fire at Fisher Plaza. Fisher Plaza is a "communication hub" in the Northwest - it's host to a number of data centers as well as TV and radio stations. The fire caused the automatic sprinkler system to kick in and essentially shut down power to one of the buildings.

3:00 AM

We learned about this around 3AM Friday morning. All QuestionPro technical staff were online assessing the situation by about 5AM. Since this was a system-wide outage (as opposed to a group of servers failing), we simply had to assess the situation as it developed.

6:00 AM

We get a preliminary indication that the root cause of the power failure is the fire and that no one is allowed to enter or leave the building until the Seattle Fire Department does a sweep. At this point we are all online, waiting for Seattle Fire to clear the building and give the thumbs up. We redirected traffic for QuestionPro.com to a temporary set of servers with a downtime notice. We asked users to check our Twitter feed for updates as well as our status page (status.surveyanalytics.com).

9:00AM

We get an indication that Fisher's electrical contractor is trying to get power back online - drying out the equipment to make sure it's safe to operate. We start putting updates on the QuestionPro and IdeaScale Twitter accounts (twitter.com/questionpro and twitter.com/ideascale) - both QuestionPro and IdeaScale are hosted in the same set of cabinets. The entire building is without power and it's a challenge even to get in and out of the building (no elevators, electronic key cards don't work, etc.).

We also make the determination that we should wait until about 5PM to see if the power comes back online before moving all the data to another data center. We do have space in a backup data center.

12:00PM
Fisher and Internap communicate that they are bringing in mobile generators on flatbed trucks - the plan is to get the generators fired up and bypass the electrical room (where the water damage was) altogether.


5:00PM
Engineers are still working on bypassing the electrical room. We decide to wait a couple more hours. There are a lot of other issues with moving all the data into the backup data center - reconfiguring the systems would take us longer, and we run the risk of not having enough servers to handle the load. Our backup systems are meant to store backups (not run the entire load and applications).

10:00PM
Power is restored to the HVAC (heating and cooling) equipment. Power is then slowly turned on for all customer equipment, including ours.

3:00AM
Power comes back online and our servers start humming. All QuestionPro technical staff are online by 5AM, working to make sure all services come back up properly. By about 6:45AM we are back to normal.

Twitter - Works like a charm:
We tried to keep everyone abreast of issues as they developed on Twitter, issuing and following updates using the hashtags #fisherfire and #fisherplazafire. If anyone ever doubted the usefulness of Twitter in an emergency, this proved firsthand (to me at least) that Twitter is amazingly useful for communicating in a crisis.

Through Twitter we found out that we were not the only ones affected by this fire. Some of the other sites that went offline:

  • authorize.net
  • bing.com/travel (farecast)
  • bigfishgames
  • Bartell Drugs
  • allrecipes.com


Needless to say, this was a pretty big disruption of our services. Both Fisher Plaza and Internap have promised us that they'll come up with a detailed explanation of the issues and the steps to prevent such outages in the future. Meanwhile, this also exposed a couple of vulnerabilities in our own preparedness. In the spirit of openness I'll talk about them - and not only will we talk about them, we'll also do something about them - and keep you posted on progress.

We will be posting a series of blog posts with the hashtag #fisherplazafire to communicate the steps we are taking to make sure this kind of disruption does not happen in the future. As with any system, we cannot make things 100% reliable - but we sure as hell can try.

Short Term Issues:

Communication:

One of the shortcomings we noticed was that our blog (which is our primary medium of communication) was also hosted within our data center. This has to change -- we'll be moving our blog (blog.questionpro.com) to hosted WordPress; Rob Hoehn will oversee that. We'll also take this opportunity to segment our blogging - we'll set up three separate blogs (one each for QuestionPro, IdeaScale, and MicroPoll).

Automated Phone Message:
We should be able to deliver the same information (like our Twitter updates) when people call in. We use Angel.com for our hosted PBX system - we'll set up the system so we can give out updates when users call during emergencies like this.

Pre-Planned Error Page:
We should have a system in place to switch our sites over to an error page (when all hell has broken loose) - we had to scramble at the last minute to set up a separate system (in our backup data center) to host the error page itself.


Long Term Issues:

Real-Time Data-Center Redundancy:

We have full redundancy _within_ the data center. So if any one of our servers dies (hard drive failure, etc.), other servers pick up the slack automatically. If one of our database servers crashes, we have replicated servers that will come online automatically within seconds. However, if the entire data center goes offline, our current plan does not have a way to move to another data center within minutes. We have full copies of the data stored offsite - but that is only the data.

What we need to get to is the ability to _operate_ out of a different data center in case of a massive emergency like this. This will undoubtedly double our operating expenses, but given the business we are in, we simply need to do it. Over the next three months, we'll be figuring out a solution so that we could turn off power to our primary data center and have everything move to our backup data center.


Finally, I want to acknowledge the patience and understanding so many of our customers have shown in the face of this emergency. As the CEO and an owner of this business, I do not take this lightly.

If there is something I can do for you, please feel free to ping me directly - vivek[dot]bhaskaran[at]surveyanalytics[dot]com






Digital Fingerprinting and Sample Quality - Part 2

[This is a guest post from Simon Chadwick, CEO of Peanut Labs, Managing Partner of Cambiar and Editor-in-Chief of Research World.]

This is a continuation of my last post, "Digital Fingerprinting and Sample Quality"...

What, then, can we do about sample quality? As usual where multivariate problems are concerned, the answer is 'a lot of things'. There is no one magic silver bullet to solve the data quality problem, but a series of bullets that, combined, will serve to make things a whole lot better. To name just a few:

  • We can come together as an industry to lay down guidelines and standards for online research and data gathering. An enormous amount of work has gone into this, including the study by ORQC, the ACE collaborative effort between industry associations, and the recently issued ISO standard. These, in addition to the ESOMAR 26 questions, lay the groundwork for real professionalism in the industry, together with learning that can be passed down through the ranks as to how to do good online research in which we can have confidence.
  • As part of this, we can tighten up our panel recruitment practices. It is painful to see reputable firms subscribe to "cash for surveys" websites that register people for multiple panels, all in the service of getting "bodies". The example below is just one of a multitude that exist out there.

survey_lot.png

  • We can start to eat our own dog food. By that I mean that we can knuckle down and start designing surveys that are not only a whole lot shorter, but also more engaging to the respondent. We have known about Flash and other techniques for making surveys more interesting and involving for ages, yet how many of us truly use them on a regular basis? And, please, don't use the excuse that we can't do that on tracking or syndicated studies because we don't want to risk breaking the data trends. Is it better to give clients reliable data or trendable data? And who has not heard of side-by-side trials?

All of that being said, these are longer term solutions. They require long-term adoption and education and will not solve the quality issue as it confronts us today. So what can we do that will move the needle right here and now?

Part of the answer lies in the very thing that gave rise to the problem itself. Data and sample quality issues online are the result of the technology that gave birth to this means of data collection. If we were not online, we would not have the problems with which online presents us. These include problems that are both quantifiable and a little more qualitative or "fuzzy".

Quantifiable issues that we know about and can deal with include:

  • Duplicates: a "duplicate" is someone who tries, knowingly or unknowingly, to take the same survey more than once. At its most innocent, a duplicate is where someone may be a member of more than one panel and be presented with the same survey on both. At the other end of the spectrum are people - and, indeed, survey factories - who deliberately enter surveys multiple times in order to maximize the cash value of their participation. The fact that they can do so is a function of their ability to get around most of the safeguards that are normally put in place. For example, they can delete cookies, come in under different email addresses or change their IP address.
  • Geo-IP violation: simply stated, this involves a person who takes a survey as if they were in one country, but in fact are in another. So, for example, if your survey looks for people in the US, a person from China could take the survey and claim to be in the US. At its most simple (and innocent), this could be a traveling business person. At its most deviant (and more usual) it could be a survey factory in China using either people or bots to take surveys on a fraudulent basis. Simple IP checking can eliminate this unless, of course, the factory is using proxy servers.
  • Speedsters: these are people who take a survey and zip through it in record time. Very often, they will straightline through complex grids ("satisficing") or deploy patterns to try and avoid being caught. They will also usually provide garbage answers to open-ended questions.

Then there are the more qualitative or "fuzzy" quality problems:

  • Hyperactives: people who take a very large number of surveys in a given period of time. A now infamous comScore report suggested that 32% of surveys were being taken by 0.25% of the online panel population. Is this a bad thing or not? Do we want to put limits on this type of behavior or not?
  • Cross-Panel Accounts: does it matter if someone is on 6 panels? Some data say 'yes', others (including ORQC) say 'no'. Do we want to flag such people to see if their data differ from others?
  • Repeat Offenders: people who have been flagged in the past as having engaged repeatedly in 'suspect' behavior. Do we want these people in our surveys? Should they be flagged for separate analysis?

These are issues that exist in the here and now. No matter how much education and standards-setting goes on, we have to deal with them today. This is where technology comes in and puts power in the hands of the researcher to make key decisions as to what constitutes quality and what does not.

Enter digital fingerprinting. Digital fingerprinting is not a new technology. It has been used by the financial services sector for some time and, indeed, has been present in research, on a proprietary basis, for a few years. But it was when Peanut Labs introduced its Optimus™ technology in 2008 as an industry-wide solution that it really started to gain attention. Indeed, since then, multiple companies have launched their own versions of the technology, mostly on a proprietary basis (i.e. you have to use our sample or our hosting to get the benefit), and have made DF a standard offering in the quality debate.

Digital fingerprinting is not rocket science (otherwise it could not have been copied so quickly!). All it does is take the 100-150 data points that a computer puts out when it connects via a browser to the Internet and combine them via an algorithm to produce a unique identifier for that computer, one that can be referenced every time it seeks to take a survey. These data points include such items as:

  • your browser and its version
  • your IP address
  • your computer's configuration
  • the software you are using and its various versions
  • the plug-ins that you have and their versions
  • other downloads that you may have on your machine

The combination of these data points produces a machine ID that is unique. So unique, in fact, that Optimus™ can detect a machine 98.8% of the time that it comes back and tries to take a survey. Even if you delete your cookies, change your browser and change your IP address, a DF technology such as Optimus™ will nail you.
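
For the technically curious, a toy version of the idea (emphatically not the Optimus™ algorithm, which is proprietary and far more tolerant of partial changes) might look something like this in Python:

    import hashlib

    def machine_fingerprint(attributes):
        """Toy fingerprint: hash a canonical string of browser-reported attributes.

        Real systems weigh 100-150 data points and survive partial changes;
        this sketch only shows the basic combination step.
        """
        canonical = "|".join("%s=%s" % (k, attributes[k]) for k in sorted(attributes))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    sample = {
        "user_agent": "Mozilla/5.0 (Windows NT 5.1) ...",
        "screen": "1280x800x24",
        "timezone": "-480",
        "plugins": "Flash 10.0;QuickTime 7.6;Silverlight 2.0",
    }
    print(machine_fingerprint(sample))  # same inputs -> same machine ID on every visit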

Why is this important? For starters, it means that we can detect duplicates straight away. If a machine tries to come in and take a survey more than once, DF will know. If a machine comes in and tries to pretend - even via a proxy server - that it is from a geographic location that it is not, DF will know. But there is more than that. By allowing the researcher to set certain variables, the technology will know if a machine is trying to speed through a survey, straightline through answers or provide poor quality open responses. The researcher can set the lowest amount of time that is respectable to take the survey, ask DF to look for satisficing, inspect opens for length and other variables and look out for repeat offenders.
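
The researcher-set rules described above can be expressed very simply. The thresholds below are arbitrary examples for illustration, not anything a particular vendor ships:

    def flag_response(grid_answers, seconds_taken, open_text,
                      min_seconds=180, min_open_chars=15):
        """Flag a completed survey for common quality problems (illustrative thresholds)."""
        flags = []
        if seconds_taken < min_seconds:
            flags.append("speedster")
        # Straightlining / satisficing: every grid item received the same rating.
        if len(grid_answers) > 3 and len(set(grid_answers)) == 1:
            flags.append("straightline")
        if len(open_text.strip()) < min_open_chars:
            flags.append("thin_open_end")
        return flags

    print(flag_response([3, 3, 3, 3, 3], 95, "ok"))
    # ['speedster', 'straightline', 'thin_open_end']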

More importantly, DF technologies such as Optimus™ that are applied across multiple data collection sources can enable a researcher to decide whether he or she wants to block machines that have been identified as engaging in suspect behavior, or merely tag them so that the quality of the data they provide can be assessed at the back end of the survey. Peanut Labs' Optimus™ Research Database (ORD) now has the results of 21 million respondents from across the entire spectrum of the survey industry. If one of these has been identified in the past as having engaged in suspect behavior, then they can be eliminated up front from participating in a survey, thus saving time and cost at the back end and improving the de facto quality of the survey itself.

What does ORD tell us about these 21 million respondents? Well, 1.5 million were duplicates, 500,000 were Geo-IP violators and 250,000 were speedsters. The overall 'suspect' rate was - you guessed it - 15%.

Digital Fingerprinting is not the solution to online data and sample quality problems. It is a solution, available right now, that can combine with industry initiatives, guidelines, standards and training to produce a quality product. And that is what we, research companies, data collectors and clients all want.

More Info:

[Simon Chadwick is the CEO of Peanut Labs, Managing Partner of Cambiar and Editor-in-Chief of Research World. He has over 30 years' experience in the research profession, both corporately and as an entrepreneur. ]

July 4, 2009

7/3 Outage Issue

A power/electrical issue was reported at our data center in Seattle, Washington at approximately 23:40 PDT on Thursday, July 2nd, 2009. This issue appears to be due to a fire at the facility.

The loss of power for the entire facility caused our sites to be fully inaccessible through 6:42 PDT on Saturday, July 4th, 2009. Any active surveys, polls, or feedback communities were unavailable during this outage.

Our servers were not damaged by this event and all existing data is completely intact. We made the decision to wait for the facility to come back online, rather than moving to our backup systems.

We sincerely regret any inconvenience this has caused. We understand the importance of your data collection initiatives and aim to do everything we can to provide a reliable service. Unfortunately, this outage was largely out of our control.


http://status.surveyanalytics.com has been updated as well

July 3, 2009

Digital Fingerprinting and Sample Quality - Part 1

[This is a guest post from Simon Chadwick, CEO of Peanut Labs, Managing Partner of Cambiar and Editor-in-Chief of Research World.]

One might be forgiven for thinking that the issue of suspect data and sample quality in online research has really only arisen in the past two years. After all, in that space of time, we have seen associations launching initiatives (including the huge study conducted by ARF's ORQC), task forces springing up, conferences devoted to the issue and the launch of commercial and collaborative solutions - all aimed at bringing about comprehensive resolution to the problem. But, in actuality, worries about data and sample quality have been around for a lot longer - Cambiar's first study of the online industry highlighted this as the top concern for both clients and researchers, and that was back in 2005!

Despite this activity (or because of it?), that concern persists, unabated. The fourth Cambiar study, conducted in February and co-sponsored by Peanut Labs and MROps, demonstrated that sample and data quality remain stubbornly at the top of the list of concerns.

data_quality_concerns.png
 
As a sidebar, it is interesting that full-service market research companies evince much more serious concern about these issues than do clients, despite the recent hype about this being a client-led revolution. Additionally, it is clear that what constitutes quality differs depending on where you are on the food chain. For full-service companies, it is defined as "data" or "sample" quality. For data collectors, the issue is much more about survey and questionnaire design. Who is right? Both are.

A literature review of what is out there on online data quality yields a plethora of articles, webinars and presentations. ORQC alone has amassed more than 300 articles on the issue, while studies such as that conducted by Burke in 2007 suggest that some 14% of respondents from online panels are in some way 'suspect'. Indeed, our own data suggest the same level of problems - of over 21 million respondents run through our digital fingerprinting software in the last 12 months across a wide variety of data collection sources, 15% were identified as being 'suspect'.

So what does this mean in the real world? Inefficiency, extra cost and, potentially, wrong decisions based on faulty data. If our clients are paying for the data that we provide them, but a proportion of those data are suspect, the least that can be said is that they are overpaying, since research companies routinely have to oversample in order to compensate for 'duds' in the data set. The research companies themselves are paying more in terms of time and salary to check data at the back end and weed out the duds. And if, heaven forbid, a client makes a decision based on faulty data, then the costs can be astronomical.
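
To put rough numbers on the oversampling alone: at a 15% suspect rate, netting 1,000 usable completes means fielding roughly 1,000 / 0.85, or about 1,176 completes, before any back-end cleaning even begins.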

So, is this a problem? Yes it is. We can run all the studies we want to try and prove that one or other component issue in data quality 'really doesn't make much difference', but the truth is that the multivariate nature of factors that make for poor data quality means that we don't really know what the impact is on our research and how much it is skewing our results.

In my next post, I'll talk about different solutions to solve these problems...

More Info:

[Simon Chadwick is the CEO of Peanut Labs, Managing Partner of Cambiar and Editor-in-Chief of Research World. He has over 30 years' experience in the research profession, both corporately and as an entrepreneur. ]

July 1, 2009

Crowd Sourcing to Improve the Issue Management Process

While issue management is not specifically addressed by the Project Management Institute (PMI) Project Management Body of Knowledge (PMBOK), it is a key process necessary for effective project management. An issue is something that requires a decision to be made and associated actions performed. It is a situation that has occurred or will occur, as opposed to a risk, which is the potential for a situation to occur. Typically, issues are tracked in anything from a simple whiteboard, to a spreadsheet, to a full-scale issue tracking and management system.

Issue management systems serve their purpose. However, they require issues to be categorized and classified by the project management team. Feedback from the project team or end users requires each issue to be assigned and prioritized. Often, team members and end users complain about the lack of transparency in the prioritization process: what gets ranked critical vs. high vs. low priority? In a previous post, I introduced you to using IdeaScale as a tool to solicit customer feedback and ideas. An alternative use of the tool is to implement it as an issue management system. Rather than having priorities defined and assigned in a top-down approach, IdeaScale allows you to "crowd source" a bottom-up approach.

Here are the conceptual steps to process from issue submission to closing.

  1. A team member (or end user) identifies an issue or problem that requires attention or a decision.
  2. The team member logs onto the IdeaScale portal and submits the issue.
  3. All team members (and end users) access the portal to review the issues list and vote the priority of each issue up or down.
  4. Over time, and with critical mass usage of the portal, priority issues will rise to the top while less important issues will remain at the bottom. Ideas that reach a critical threshold (through the voting mechanism) will be assigned to an owner and tracked as part of the project management process.
  5. After the issue is addressed and resolved, the "idea" is closed by the site administrator.

Develop Issue Management and Escalation Procedures

The first step to building an issue management process is to document the associated procedures. The State of California Office of System Integration defines the issue and escalation process as follows:

The Issue and Escalation Process describes how the project identifies, tracks and manages issues and action items that are generated throughout the project life cycle. The process also defines how to escalate an issue to a higher-level of management for resolution and how resolutions are documented.

Since the focus of this article is to crowd source the tracking of issues, here is a great example by the State of California, Office of System Integration - Issue and Escalation Process.

Implement IdeaScale to Capture and Monitor Issues

As shown below, the IdeaScale entry page can be customized to collect the information you need to properly manage the issues submitted. By default, the required fields include title/subject and description. In this example, I included stakeholders affected, due date, decision required, and suggested action. A category field is used to segregate issues vs. suggestions. As team members and end users submit their ideas, they are included in the issues list. Other end users and team members can log into the portal and vote the issues that are relevant to them up or down.

Issue-Submission.png

For the IdeaScale method of issue tracking and management to work, it requires a critical mass of users. Crowd sourcing a task requires active participation in the community. If only a handful of users actually log in, submit issues, and vote, then it emulates the traditional form of issue management. However, if a large enough group of users consistently logs in and participates actively, then the concept of crowd-sourced issue management will work.

The bottom-up approach to issue management increases transparency in the process. Users are part of the discussion and actually have input through the voting mechanism. The community polices itself and prevents abusers from rigging the system. Through IdeaScale's APIs, the issue management system can be integrated with project portals such as SharePoint or with other project management tools.

Bottom Line

As with any tool, whether it be a simple paper list, an Excel spreadsheet, or a crowd-sourced IdeaScale issue tracking system, it does not replace good project management and communication with the team. Before attempting to implement the described process, gauge the readiness of the project team and their willingness to participate in the process.

[Daniel Hoang advises governmental agencies, business, and individuals on performance management, business processes, and strategic planning to improve organizational development and long-term growth. He is an experienced consultant, auditor, and strategic planner, and has over 10 years of online social media and social networking experience.]

June 29, 2009

Finding Representative Traffic for your Surveys

peanut_labs_logo.jpg

Do You Need Representative Sample?

One of the most frustrating experiences for clients and agencies alike is a sample supplier's inability to supply representative traffic into their surveys. Nationally representative sample is relatively easy to achieve when basing it on preset quotas, but it is much harder for providers to balance starts based on age, gender, geography, income, and education. Typical industry practice is to use "balanced outgo" - meaning that nationally representative batches of e-mails are sent out. The problem with this practice is that most suppliers are not able to control for variances in response rates, which can be as high as 20% among different audiences. In the simplest case, you could send 50/50 male/female invitations and end up with a 40/60 split upon survey entry (starts). This leads to additional weighting, or a need to oversample.
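
To make the weighting concrete: with a 50/50 target but a 40/60 actual split, male responses would need a weight of 0.5 / 0.4 = 1.25 and female responses a weight of 0.5 / 0.6 ≈ 0.83 just to restore the intended balance.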

Balanced Starts
We found that representative, balanced starts are essential in providing the highest levels of quality and accuracy for online research, which is why we came up with Peanut Labs Balanced Starts. This allows us to accurately target and deliver representative starts across a variety of variables and ensures that your sample plan is followed precisely. PL Balanced Starts is NOT a traffic routing system, but rather evaluates known profile data for each respondent at the start of the survey, in real time, and ensures national representation with on time delivery.


More info:

June 26, 2009

Processing Large Amounts of Text

An online survey usually includes both quantitative and qualitative questions. Analysis of the quantitative data is quite easy, using tools such as our real-time summary report, grouping/segmentation tool, pivot tables, etc.

The qualitative analysis, however, is much more challenging. There are numerous routes you can take, all of which involve expensive software or a great deal of time spent coding/tagging the data by hand.

logoAI3.gif

One suggestion we've had from clients to handle the workload of analyzing qualitative data is to integrate QuestionPro with Amazon's Mechanical Turk. If you haven't heard of this service yet, it's pretty smart: anyone can submit a request for a task to be completed, while workers can select from the tasks that they would like to get paid to complete.

Mechanical Turk offers an API, so naturally, linking QuestionPro with it to tag your open-ended data is the next logical step. A rough sketch of what that bridge could look like is below.
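
Here is a sketch of the requester side using Amazon's Python SDK (boto3). The QuestionPro export call and the tagging-page URL are hypothetical placeholders, not a shipping integration:

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # ExternalQuestion points workers at a page that shows one open-ended answer
    # and collects topic tags. The URL below is a placeholder.
    EXTERNAL_QUESTION = """<ExternalQuestion
      xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/tag?response_id={rid}</ExternalURL>
      <FrameHeight>400</FrameHeight>
    </ExternalQuestion>"""

    def post_tagging_hits(responses):
        """Create one small Mechanical Turk task per open-ended survey answer."""
        for rid, _text in responses:
            mturk.create_hit(
                Title="Tag a short survey comment",
                Description="Read one open-ended survey answer and pick the topics it mentions.",
                Keywords="survey, tagging, categorization",
                Reward="0.05",
                MaxAssignments=1,
                AssignmentDurationInSeconds=600,
                LifetimeInSeconds=86400,
                Question=EXTERNAL_QUESTION.format(rid=rid),
            )

    # post_tagging_hits(fetch_open_ended_answers(survey_id=12345))  # hypothetical QuestionPro export helper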

Would this be something that would be helpful to people? Please let us know by voting/commenting on the IdeaScale idea - we'd love to hear your feedback.

Further, if you'd like to participate in our beta of this tool, send me a note at blog at surveyanalytics.com or give me a call, +1-206-686-7070 ext 10.


More info:

June 24, 2009

Gaining a Deeper Understanding of Causal Data

[This is a guest post from Gary Angel, President of Semphonic, a web analytics company based in San Francisco]

Online survey technology has made available a whole range of analysis and measurement that was really not possible before. From inexpensive primary research to a deeper understanding of your web site audience to a different perspective on web behavioral data, online surveys can contribute mightily to our knowledge.

But online survey analysis doesn't work quite the same way for each of these tasks. When you're doing primary research or audience profiling using online surveys, your biggest concern is probably getting a good sample. Particularly in the early days of the web, most researchers simply discounted online surveys for primary research because the online population was too different. That isn't really true for most companies nowadays - which is certainly one of the reasons why online survey usage has skyrocketed.

But assuming your sample isn't skewed in some fundamental sense, the analysis of online survey data for primary research and audience profiling is essentially identical to the body of techniques developed for offline research.

That isn't true, however, for the very wide and popular range of cases where you want to apply the results of survey research to a deeper understanding of the web site and the behaviors exhibited there. To see why this is so, consider the following example:

commentbox.jpg

A media site launched an online survey of visitors. They tracked overall site satisfaction and also the usage of a number of different site areas. They had recently launched a new "comment" functionality on the site that allowed users to submit comments, rate comments, and track their own status as commenters. Tracking this tool in the online survey, they found that the users who generated comments had a significantly higher satisfaction score than the site average.

From this, they concluded that the comment functionality was boosting site satisfaction and was a success.

Sadly, however, this conclusion is simply not warranted. There is no way to determine from the basic facts:

Comment users have a higher sat score than non-comment users (attitudinal)

Or even

Comment users consume more pages than non-comment users (behavioral)

if either relationship is causal. We don't know if commenting self-selects visitors who happen to be more satisfied and consume more content or whether it actually contributes to that relationship.

People who use comment functionality may already be more engaged and have higher satisfaction than those who do not bother. If so, the apparent (and statistically valid) relationship between using comment functionality and satisfaction is non-causal - at least in the direction we are hoping for.

Comments are not driving satisfaction, they are being driven by it.

It's as simple as this. People who are highly-engaged with your site are likely to be more satisfied with it. They may also be more likely to view or post comments. This in no way proves that they are more satisfied because they view or post comments. They may be less satisfied as a result of commenting. They may be more satisfied. There may be zero impact. You just don't know. Looking at the satisfaction scores for each area on your site and inferring causality from them is simply a basic statistical fallacy.

This is an incredibly common source of error when doing web analytics in general and it has migrated seamlessly over into the usage of online survey data. Self-selection is, in fact, a subtle sort of sampling problem where we forget that the sub-populations we are using for an analysis are not random.

I think it's fair to say that a simple majority of the uses of online survey data applied to web site performance that I see are nothing more than interpretive errors caused by self-selection.

You can defend yourself against these types of errors, but it takes significantly more work. Internally, you can try to use other variables inside the survey to hold the populations constant across a range of other factors (like intent, brand awareness, overall usage) before you look at comparative satisfaction scores.

Naturally, the quality and size of the survey also affect its analytical strength. While "less is better" (more people will fill it out), a good survey will ask the same questions in different ways in order to judge the quality of responses.  "Were you able to find what you're looking for on this website?" can be paired with "Was the navigation or search on this website effective?"

Widely disparate answers to these two questions suggest that survey respondents are not really paying attention to what they're answering, and can then be filtered out.  This, of course, is all standard surveying technique.

A different technique is to use behavioral data integration to analyze a population of relatively similar respondents (as discussed in a previous post): if you hold constant the number of visits, engagement milestones, and total activity, you can often get a good comparative population. Finally, you can use sampling techniques directed at tracking the satisfaction of users before and after they try a tool or area (like commenting).
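
As a small, made-up illustration of what "holding the population constant" looks like in practice (the column names and numbers are invented; pandas is just one convenient way to do it):

    import pandas as pd

    # Hypothetical merged dataset: one row per survey respondent, with the
    # attitudinal answer plus a behavioral engagement measure.
    df = pd.DataFrame({
        "used_comments": [1, 1, 0, 0, 1, 0, 1, 0],
        "visits_30d":    [12, 3, 11, 2, 14, 4, 3, 13],
        "satisfaction":  [9, 7, 8, 6, 9, 6, 7, 8],
    })

    # Bucket respondents by engagement, then compare commenters vs. non-commenters
    # *within* each bucket instead of across the whole (self-selected) sample.
    df["engagement"] = pd.cut(df["visits_30d"], bins=[0, 5, 20], labels=["low", "high"])
    print(df.groupby(["engagement", "used_comments"])["satisfaction"].mean())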

Each of these methods is designed to give you a valid population with which to compare the group who did the activity you're interested in. Of course, each of these is more work than just doing a cross-tabulation between two survey variables. But what works okay for profiling your basic web audience is more than likely to be fundamentally deceptive when applied to a range of analysis that can involve self-selecting behaviors on the web.

As I mentioned above, behavioral analysis, analyzed using "engagement" as part of the analytical process, is just as prone to self-selection as online survey data. Combining the two is often the best way to build a much more comparative population set than either can achieve on their own.  Survey data, when combined with behavioral data, can enlighten marketing and editorial teams about not just what visitors are doing, but also what they're thinking.

Indeed, finding ways to develop better control groups is one of the larger, if somewhat hidden, advantages of combining web behavioral and online survey data.

More Info:

[Gary Angel is President of Semphonic (http://www.semphonic.com), the leading independent web analytics consultancy in the United States. Headquartered in the San Francisco Bay Area, with offices in Washington, D.C. and Boston, Semphonic works with all of the major web analytics tools including Omniture, WebTrends, Unica, Google Analytics and Coremetrics. Semphonic clients include companies like American Express, Barclays, the BBC, Charles Schwab, Genentech, Intuit, Kohler, the National Cancer Institute, National Geographic, Nokia, and Turner Broadcasting.]

June 22, 2009

At the Bleeding Edge of Online Research

An interesting webinar is coming up on the latest trends in online research. Hosted by Simon Chadwick along with other industry experts, the session will review and discuss new and innovative solutions for quantitative research.  Topics such as mass semiotics, social media, and predictive market research will be addressed.  Attendees will gain insight into advances in online research.  Register today to stay informed of the latest emerging industry developments.
 
More info:

June 18, 2009

Google is no longer the undisputed king - Twitter Search rules.

I started using Google back in 1997, when I was a student at BYU. We were trying to solve a programming assignment and one of my buddies told me about this cool search engine called Google - better than HotBot, Lycos, or even Yahoo. We searched on Google, found the answer to our question on the first page, and I've been addicted to Google ever since.

Yes - there is Bing - but seriously - as they say on the Microsoft campus - the only people who use MSN/Live Search are the people who have not figured out how to change the default home page in IE.

Today, however, Google failed me. I use Adium (an IM client) to connect to all the messaging networks (Adium connects to Yahoo, MSN Messenger, Facebook, etc.). For the last two or three days Adium has not been able to connect to the Yahoo servers. At first, obviously, I thought this was some issue with my network. I wanted to see if others were experiencing the same issue and what the solution could be.

As with everything, I googled it - and could not find anything interesting! There were a couple of results, but they were referencing blogs/content created months ago!

Screenshot:


I then searched for the same search terms on Twitter:





I downloaded the nightly build of Adium - and bliss....


June 17, 2009

Hot Mashups: The IdeaScale API

"...when government makes data available, it makes itself more accountable and creates more trust in its actions," - Ellen Miller, Sunlight Foundation

IdeaScale includes a powerful API right out of the box that lets you dream up application mash-ups we never even thought of. We're hoping to open up as many of these as we can to get the creative juices flowing! What are all the options? Let's review:

XML Data Dump

You can now publish a "snapshot" of your idea data in an RSS XML format for anyone to download and manipulate (this only applies to non-private portals). Simply switch to your "Reports" tab and then click "Export Data". From there, developers can access the XML download via your community's API page (click "developers" in the footer of your portal).
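
As a quick sketch of what a consumer of that export might look like - the URL pattern and element names here are illustrative guesses, so check your own community's developer page for the real format:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder URL - grab the real export link from your community's API page.
    EXPORT_URL = "http://yourcommunity.ideascale.com/a/dataExport.xml"

    with urllib.request.urlopen(EXPORT_URL) as resp:
        root = ET.fromstring(resp.read())

    # RSS-style <item> entries; the tag names are assumptions for illustration.
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        votes = item.findtext("votes", default="?")
        print(votes, title)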


help-554-1.gif

Developer Page

Speaking of which, you'll notice that every IdeaScale portal includes a developer page. From here, potential developers can request an API key for access, as well as find documentation on the latest methods that are available.


Widgets

You don't have to be a code-slinger to integrate your IdeaScale community with your website, though. IdeaScale widgets are a simple and easy way to collect feedback in almost any situation. Every IdeaScale community also has a widgets page that includes "Hot Ideas" and the "Daily IdeaFix" (click the "widgets" link in the footer of your community).

idea_of_the_day.png

Other widgets are accessible right within IdeaScale: click on the "Publish" tab, and then the "Widget" link.  Here you'll have access to code snippets to add a feedback tab and a "mini-view" of your feedback community (both employed on this website in the left and right navigation).


More Info:


June 15, 2009

Real Time Feedback or Where the F$%@ is my Survey?

hair_out.png

I upgraded my iPhone a few weeks ago (not by choice - the phone took a dunk). Right on cue, by the time I got home, a follow-up survey from the mothership had arrived asking me about my experience. At the time the survey was sent, I was very happy with the transition to the new phone (I upgraded from my clunky first-generation iPhone). All my info was restored from backup without a hitch by someone in the store, and thus I gave a glowing survey response.


Treat Your Customer Like a God

Apple is extremely interested in our purchasing experiences. I know this because after a previous shopping experience gone awry, a store manager called me within the hour to follow up (based on my negative response to a survey). I felt like a god being treated with such respect! The manager was actually interested in what I had to say!


Being Treated Like Dirt

Well, everything was good until I checked my voicemail and realized all my saved messages were gone! I was, of course, furious! How could this obvious oversight be tolerated!? After calling technical support and hitting the Genius Bar, I soon realized that my previous god-like status had been reduced to that of another belligerent, babbling customer. That survey I completed a while back was now inaccurate! I wanted a call from the store manager, but there was no way I was going to get it now. Where the F$#& is that survey you sent me just hours ago!!!???

Now I was pretty much being treated like dirt. My opinion had zero, or probably negative, currency. If Apple didn't want to hear from me, then, if nothing else, I wanted to be heard by other customers so the same fate didn't befall them! Of course I gave up fighting pretty quickly and moved on: how could I expect a huge company to care about my tiny issue?


Timing is everything

All organizations struggle with this problem: how do you turn these fleeting moments of valuable feedback into useful suggestions that translate to ROI? Now that time has passed, I couldn't care less about the issue - I've moved on. I'm not going to bother spending any time trying to get through to the store's management.

When something goes wrong for your customer, when would you want to contact them?


A) Beforehand. That is kind of pointless, right? How can your customers have any useful feedback until they've actually used the product or service? (i.e., "...everything OK? Oh, you haven't used it yet? Well, great!") You'll most likely get low-quality data, but in high quantity.


B) Right Away. Well, this is ideal. But how do you make a connection with the customer right when that happens? Most firms fail at this, and in their defense, it isn't exactly easy. Using action alerts, you can set up triggers to be notified of a survey response based on keywords. However, this doesn't solve the problem of when to send the survey invite to the respondent.


C) Afterwards. This is where most firms land. Feedback collection happens at a pre-determined interval that the firm can only guess is optimal (e.g., 3 hours after purchase). Again the problem persists: how can I ensure that the feedback I'm getting arrives at that passionate moment when my customer cares the most about articulating quality feedback?


It's all pretty much a crapshoot. There could be whole swathes of customers absolutely furious about your product or service, with no way of being heard.


Real Time Feedback with IdeaScale

The solution to this problem is real time feedback. Create an IdeaScale community for your customers and let them know you're listening in real time, all the time. When you touch base with your customers, before, during, or after a problem, remind them you're listening by directing them to your community. Let groups form around issues or suggestions, and let those impassioned groups edit and revise the solution. Most importantly, communicate directly with those groups to show them you've solved the problem and turn customers into advocates!

(Since my problem was phone related, I'm happy to see that the Palm developers have already started collecting ideas for the Palm Pre.)

More Info:

June 12, 2009

You've Got Good Data - So Share it!

report_sharing.png

Some of you may have noticed that we've revamped our reporting tools over the last couple of weeks. The two biggest improvements have been to the real-time summary report and the customized reports.

Sharing

We've now made it easier to share the real-time summary report. You'll see Twitter, email, and Facebook icons at the top of each summary report. Click on any of these to post directly to your network.


custom_report.png 

Custom Reports

Sometimes the Real Time Summary report just isn't enough, though. It's a pretty common use case to remove certain questions from the report that are irrelevant to the overall study. With that in mind, we've gone ahead and revamped the Custom Reports tool to better meet these needs.

When you create a custom report now, you'll be greeted with a familiar wizard interface that will ask you some basic questions about your desired report. You can choose whether to include both open-ended text and analytical questions, or just one or the other. Once the report is created, you can edit/delete certain questions to further customize the report.

Go ahead and take a look at this new functionality today - share your data!

June 10, 2009

Dig Up Profitable Market Niches

[This is a guest post from Ivana Taylor of Third Force Consulting]

When the economy tightens up, small businesses start thinking about expanding to new markets to find new customers.  But that's a tough road to travel on a limited budget.  Believe it or not, identifying and targeting a tight niche will not only increase your sales but also your profit margins, because you'll be offering a specific solution "customized" to a narrow audience that's willing to pay for a unique value proposition tailored to their needs and applications.

Now that you're convinced that it's more fun to have customers so you can sell them things, you're wondering how you can even begin to figure out a good niche to serve.  Stop worrying, I'm going to show you how to find a niche you'll love that will love you back.

1.    List general topics around your area of expertise that interest you.  For example, say you're in the software business.  You might list things such as software as a service, customer communities, social media, etc.

2.    Make a list of "audiences" or potential types of customers.  They might be related to your areas of interest - or they might not.  This is a brainstorm, so just make your list.  I'm going to list small business owners, marketing managers, and freelance writers...

3.    For each audience or customer type, list about three to five frustrations they might have.  Small business owners might be frustrated about the high cost of finding high-end marketing consulting for just a project or two, they might also be frustrated about getting feedback from customers.  Freelance writers might be frustrated about not being able to predict their income from month to month.  You can take it from there.

4.    Now play mash-up and mix and match some audiences and their frustrations with a general topic on your list and see what new opportunities for offerings come up for you.  For example, I came up with a membership site for freelance writers that allows them to connect with small business owners who might need expert writing help.  Not exactly the most creative idea, but I wanted to show you how the process worked.

Once you have your list of potential offerings together for your target audience, you can start pulling together a really fun poll or survey to find out what other opportunities might come up for you.

You can use a standard survey, but I'd recommend using IdeaScale!  Put the widget on your web site and open up the conversation to your web site visitors.  Put some suggestions out there yourself, write a blog post with some instructions and invite your community to start voting ideas up and down.  Don't have an active web or blog community?  No worries, send an e-mail to your customers and invite them to participate in the creation game.

Online surveys used to be the ONLY cost-effective choice to get feedback, but now, IdeaScale is my new favorite.  Not only is it an INTERACTIVE survey, but it's a marketing tool that tells your customers what you're up to and gets them involved in developing their next favorite product or service.

Jump into the IdeaScale pool and start uncovering profitable, fun niche markets today!

Ivana Taylor works for Third Force Consulting, a strategic marketing firm that uses QuestionPro to collect and analyze customer feedback and input for brand development, customer satisfaction, and loyalty.

June 8, 2009

Find Respondents For Your Survey

"It was so simple and fast to hop in and and setup my survey to integrate with my sample provider. When time is of the essence, this makes all the difference." -a QuestionPro user


Survey Completed!

So you've got your survey perfectly dialed in on QuestionPro. Everyone has given their two cents, it's signed off by your manager, and you've decided on the reports you're going to build. Soon you'll be a data rockstar.


Lacking One Important Thing: Respondents

Oh wait, then you realize that you don't have any people (or not enough) to take the survey! This inevitably leads to a discussion with panel companies about purchasing "sample," or respondents - people willing to take a survey who fit the particular demographic you're looking for. You'll need to call these panel companies, describe the demographic you're looking for, and then finally enable your survey to work with the chosen panel company.


QuestionPro Panel Integration

QuestionPro does not provide sample itself; instead, we've worked with the major panel companies to integrate with their systems - making setup as simple as possible. To find sample companies, go to the "Send Survey" tab within QuestionPro and click "Find Respondents" to place a sample request. We'll get a quote back to you right away from our various sample providers.

Now, once you've purchased your respondents, all you need to do is enable the panel integration feature for your survey. Simply go to the "Edit Survey" tab, then click "Finish Options" in the left-hand navigation bar. Choose the panel provider you've selected from the drop-down and hit save.


Welcome SSI

Also, we are always reaching out to new panel providers, so you have the most flexibility when it comes to purchasing sample. With that, we'd like to welcome our latest integration partner, Survey Sampling International (SSI).  SSI offers access to more than 6 million consumer and business-to-business research respondents in 54 countries via Internet, telephone, and mobile. SSI serves more than 1,800 clients worldwide, including three-quarters of the top researchers. Welcome SSI!

logo.png


More Info:

June 5, 2009

Survey: 49% of Companies Doing More In-House Research

"What percentage of research will be conducted in-house in 2009?"


inhouse_research.png

A recent industry survey conducted by Simon Chadwick and sponsored by Cambiar, MROps, and Peanut Labs shows that more companies will be doing research in-house this year. To get a copy of the full results of the survey, click this link.

June 3, 2009

IdeaScale : Revision History Visualization (6/3)

Over the weekend we added a couple of quick enhancements to IdeaScale:

Revision History Visualization:

IdeaScale offers not only the ability to vote and comment on ideas, but also wiki-style editing of ideas. The revision history previously just displayed the older versions of the text - we now have a "diff" engine that computes the differences and presents them visually.
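
For the curious, the underlying idea of a text diff is standard; in Python it looks roughly like this (our production engine is more involved and renders the result visually):

    import difflib

    old = "Add an export button to the reports page"
    new = "Add an export and print button to the summary reports page"

    # Compare revisions word by word so small wiki-style edits stand out.
    for line in difflib.unified_diff(old.split(), new.split(), lineterm=""):
        print(line)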

The primary objective here is to have an "at-a-glance" overview of how the idea is changing/morphing over time. Here is a screenshot below:

May 27, 2009

The White House soliciting feedback - Powered by IdeaScale

otsp.jpg

"...we are proud to announce an important next step in this
historic call to action - one that will help us achieve a new
foundation for our government - a foundation built on the values of
transparency, accountability and responsibility."
-

Valerie Jarrett, Senior Advisor to President Barack Obama


On January 21st, President Barack Obama issued the Memorandum on Transparency and Open Government, calling for sweeping changes in the level of participation and openness in government. We're pleased to announce that we've teamed up with the White House Office of Science and Technology Policy and the National Academy of Public Administration to enable open feedback powered by IdeaScale. The level of transparency within the Obama Administration is unprecedented, and we're happy to be one of the vendors chosen to help all citizens participate.


This online brainstorming session will enable the White House to hear your most important ideas relating to open government, including innovative approaches to policy, specific project suggestions, government-wide or agency-specific instructions, and any relevant examples and stories relating to law, policy, technology, culture, or practice.

This session will provide ideas for two more stages of collaboration. Next, a discussion phase will occur where the top-rated ideas will be explored further. Finally, a drafting phase will begin, where anyone in the public can help edit the language of the final recommendations.

The Stats (as of 5/27):
  • 639 Ideas
  • 1,073 Comments
  • 23,000 Votes
  • 200 requests/sec peak (voting/commenting/viewing)


What we Learned:

The default view on the home page was initially set to top-rated ideas. While it's fun to see what's top rated, the problem is that new ideas don't get a chance to see the light of day. This was later changed to "recent ideas" mode, as we found the site was more valuable when new ideas had a way to get some traction (this is just a simple toggle in the IdeaScale settings).


Twitter was the main channel for expansion. Search for #ogov or opengov.ideascale.com on Twitter and you'll see the viral spread. Twitter integration is built right into every IdeaScale community by default - all you have to do is turn it on.

The other things that have gotten some excitement are our widgets and API. We think every application out there should support a mash-up - we love them. To that end, IdeaScale supports lots of widget options and an accessible API, with more and more methods being added all the time.

Press Coverage:

  • The Seattle Times
    "Survey Analytics' IdeaScale crowdsourcing platform is providing a way for citizens to suggest and discuss ideas for increasing the openness and transparency of the federal government."

    Brier Dudley, Seattle Times


  • Xconomy | Seattle - Business + Technology in the Exponential Economy

    "Using this Web platform, people can share their ideas and recommendations for how to make government more open, as well as vote on others' proposed ideas."

    Greg Huang, Xconomy Seattle




  • "a site where users can make and vote on suggestions of how the government can take further steps in this direction."

    Anthony Ha, Venture Beat






  • "Ideas from the public will be solicited and sifted using IdeaScale, a third-party platform that enable anyone to post an idea, add comments, and vote them up or down. That process will run through May 28. The National Academy of Public Administration, which guided the discussion process on Recovery.gov, will handle the management of this effort."

    Micah Sifry and Nancy Scola, TechPresident



  • NextGov
    "Online discussions are scheduled to start on June 3 based on what the White House identifies as the most compelling ideas. On June 15, the public will be able to collaborate on more formal recommendations through a wiki, a Web page that allows users to add and edit content."

    Aliya Sternstein, NextGov

  • OMB Watch

    "OMB Watch Applauds Obama Administration's Step Forward on Open Government"

    OMB Watch Press Release
More info:
