Password breaches are happening in massive numbers. In one week last year, 500,000 Yahoo! passwords were exposed. While one exposed password may not cause you much grief, if you use that same password for your bank login and other accounts that protect your financial information, you could be in big trouble. Because companies can’t guarantee the security of your username and password, it is crucial to use a different password for every Internet site and/or service. Given the large number of sites most of us frequent while at work and at home, the only real way to ensure password security (and keep our sanity) is to leverage a password manager.
Password managers are often built as web browser extensions that capture usernames and passwords as they are entered into websites. The better ones offer to save credentials automatically and then fill them in for you the next time you visit a site. Password managers protect your stored credentials with a “master password,” so you only need to create and remember one strong password. As an added bonus, password managers can generate strong, random passwords on the fly.
Password managers securely store login credentials in a single encrypted file, often called a vault. Some password managers also let you store passwords or notes in the vault that are not website-related.
All password managers suggest you set a master password. This master password locks and encrypts your password vault, so a strong password is highly recommended. The software may rate the strength of the password you choose, or have built-in tools to assist you with password creation.
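The role the master password plays can be sketched in a few lines. This is an illustrative example, not how any particular product works: it assumes PBKDF2 key derivation (available in Python's standard library as `hashlib.pbkdf2_hmac`), and the `derive_vault_key` helper and its parameters are invented for the sketch.

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 200_000) -> bytes:
    """Derive a 32-byte vault encryption key from the master password.

    PBKDF2 with a random salt and a high iteration count is one common
    way to turn a human-memorable password into an encryption key.
    """
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations
    )

salt = os.urandom(16)  # stored alongside the vault; it is not a secret
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32

# The same password and salt always yield the same key, so the vault can
# be decrypted again later; a different password yields a different key.
assert key == derive_vault_key("correct horse battery staple", salt)
assert key != derive_vault_key("wrong password", salt)
```

Because the key only ever exists in memory after the user types the master password, whoever holds the encrypted vault file alone cannot decrypt it.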
If you forget your master password, the software may provide a way to recover it using email or text messaging. As an added security measure, some managers leverage multi-factor authentication, requiring users to provide a second form of identification when attempting to unlock the vault from an unknown device.
Vault data can be stored locally on your computer, on a flash drive, or off-site in the cloud. While storing sensitive data in the cloud may seem risky, when done right – such as by the company LastPass – no one can gain access to your encrypted data without the master password. Some managers also offer a way to store a printed, physical copy of the vault’s contents, for times (such as during a disaster) when the digital solution can’t be depended on. Be sure to store these hard copies off-site.
Choosing a Password Manager
Information Services and Technology (IS&T) does not recommend one tool over another, but does recommend the use of a password manager. Here are two that members of the IS&T IT Security Services Team like:
- LastPass (free, multi-platform and multi-browser support)
- 1Password ($24.99 for single-user license, multi-platform, multi-browser support)
LastPass is cross-platform and has a very robust free tool as well as a premium option for $12 per year. IT Security Services Team members can recommend LastPass based on their positive experience with it, including its ease of use. Mike Halsall, a member of the team, says: “People are awful at creating and remembering strong passwords. I know three passphrases total, but I have 191 passwords. I don’t even know my banking password. My passwords are inputted for me, by the password manager, when I hit the login form of a site.”
1Password has many of the same features as LastPass, but encrypts and stores the data locally, rather than in the cloud. (The company gives you the option to sync your vault via Dropbox.) 1Password is considered by some to have a more user-friendly interface, but it does lack some of the more advanced functionality of LastPass (such as the option to enable two-factor authentication and the ability to restrict vault logins based on IP location).
If you have concerns about a compromised password, contact the IS&T Help Desk.
As the Reinhart-Rogoff story started up, Peter Frase of Jacobin wrote a critique of liberal wonk bloggers titled “The Perils of Wonkery.” Now that things have calmed down, I’m going to respond. Fair warning: this post will be a bit navel-gazing.
I recommend reading Peter’s post first, but to summarize, it makes two broad claims against liberal wonk bloggers. The first is the critique of the academic against the journalist. This doesn’t engage why wonk blogging has evolved or the role it plays. The second critique is the leftist against the technocratic liberal, which I find doesn’t acknowledge the actual ideological space created in wonk blogging. I find both of Frase’s arguments unpersuasive and also under-theorized. Let’s take them in order.
1. Liberal Wonks in Practice
Frase, a sociologist, locates the peril of wonkery in the fact that it needs to engage with academic research that often is more complicated than the writers have the ability to critically evaluate. “The function of the wonk is to translate the empirical findings of experts for the general public.” As such they are subject to a form of source capture, where they need to rely on the experts they are reporting on, as “they will necessarily have far less expertise than the people whose findings are being conveyed.”
We can generalize this critique as one that academics make of journalists all the time. Journalists don’t understand the subtlety of research and how it often functions as a discourse that changes over time. It’s a conversation on a very long time scale, rather than a race with winners and losers. They want dramatic headlines, conflicts, and cliffhangers, often over whether something is “good” or “bad” or other topics that make academics roll their eyes. Where researchers spend a lifetime on a handful of topics, reporters bounce from topic to topic, oftentimes in the course of a single day, made even worse through the “hamster wheel” of online blogs.
That’s a problem, as far as it goes. But bad journalism is easily countered by…good journalism. Source capture actually strikes me as one of the smaller problems wonk bloggers face. If journalists are worried that they are over-influenced by their source, they can just call another expert — which is what Wonkblog did for the Reinhart/Rogoff studies. Wonk bloggers tend to focus on a group of related areas, and like any other journalist, they develop a list of the top researchers in any area to navigate complicated issues. They call people and ask questions.
It is true that in the wonk space, judgments on where the wonk’s self-declared expertise ends and where the line should be drawn on what is covered explicitly lie with the authors themselves. But this just makes explicit what is hidden in all of journalism, which is the problem of where to draw these lines.
It’s true that these debates take place within the context of existing policy research. A friend noted that Frase’s piece rests on a weird contradiction: it argues both that wonks don’t have enough expertise, and that expertise is just a vehicle for power and capital to exert themselves and should be resisted. But that assumes wonk blogging is merely a replication of ruling ideology.
1.a What Creates Wonks?
We’ll talk about ideology more in a minute, but it’s surprising that Frase doesn’t even try to ground his analysis in the material base of the institutions that create and fashion liberal writers. Frase seems to imply that the peril derives from personality-driven ladder-climbing, or from a desire to bask in the reflected glory of Serious People; he’s a step away from saying what wonks do is all about getting invited to cocktail parties.
But let’s try to provide that context for him. Why has “wonk” analysis risen in status within the “liberal” parts of the blogosphere, and what does that tell us about our current moment?
Contrasted with their counterparts on the right, young liberal writers come up through journalistic enterprises. That’s where they build their expertise, their approaches, their sensibilities, and their dispositions, even if they go on to other forms of opinion writing. Internships at The Nation, The American Prospect, or The New Republic are a common touchstone, with the Huffington Post, TPM, and Think Progress recently joining them. Though this work has an ideological basis, the work is journalism. Pride, at the end of the day, comes from breaking stories, working sources, building narratives, and giving a clear understanding of the scale and the scope of relevant actions. And part of that reporter fashioning will involve including all sides, and acting like more of a referee than an activist.
Where do young conservatives come from? They are built up as pundits, ideological writers, or as “analysts” or “experts” at conservative think-tanks. These conservatives then go out and populate the broader conservative infrastructure. As Helen Rittlemeyer notes, one reason conservative publications are declining in quality is because they are being filled with those who work at conservative think tanks (and are thus subsidized by the tax code and conservative movement money).
This is an important distinction when you see the numerous criticisms asking for wonky liberals to get more ideological. Bhaskar Sunkara argues that liberal wonks have a kind of “rigid simplicity” that is incapable of even understanding, much less challenging, the conservative ideology it is meant to counter. Conor Williams makes a similar argument, arguing that the “wonks’ focus on policy details blinds them to political realities.” In a fascinating essay comparing wonks to conspiracy theorists like Alex Jones, Jesse Elias Spafford writes in The New Inquiry that wonks “have risen to prominence because they come wrapped in the respectable neutrality of the scientist and have eschewed the partisan bias of the demagogue” and that, instead of agreed-upon facts, “our political discussions need to grapple with ideology and psychology, and with the underlying tendencies that draw people to particular ideologies.”
But just as there are numerous pleas for liberal writers to get more ideological, there are pleas on the right for more actual journalism. The post-election version of this was from Michael Calderone at Huffington Post, ”Conservative Media Struggles For Credibility.” The hook was that everyone was excited because there was finally one genuinely good conservative congressional reporter in Robert Costa. Previous versions include Tucker Carlson getting boos at CPAC for saying, “The New York Times is a liberal newspaper. They go out, and they get the facts. Conservatives need to copy that.” Conor Friedersdorf issued a similar call back in 2008: “[a] political movement cannot survive on commentary and analysis alone! Were there only as talented a cadre of young right-leaning reporters dedicated to the journalistic project…the right must conclude that we’re better off joining the journalistic project than trying to discredit it.”
Meanwhile, the attempts by actual reporters (Tucker Carlson, Matthew Continetti) to build journalistic enterprises on the right (Daily Caller, Free Beacon) have collapsed into hackish parodies. The funders are wising up; the Koch Brothers are looking to just purchase newspapers wholesale rather than trying to build them out organically through the movement.
1.b Why Liberal Wonks?
Frase also makes no attempt to understand why wonk blogging has risen right now. And even a cursory glance at the historical moment makes it clear why wonk blogging has become important. From 2009-2010, several major pieces of legislation quickly came up for debate on core economic concerns: the ARRA stimulus and more general macroeconomic stabilization, health care reform, financial reform, immigration reform, unionization law, and carbon pricing.
Some passed, some didn’t. But all of these were complicated, evolved rapidly, and needed to be explained at a quick pace. Conventional journalism wasn’t up to the task, and wonks stepped up. As these reforms unfolded, often shifting week by week, there were important battles over how to understand the individual parts. There’s a passage from Alan Brinkley about businessmen asking, in 1940, if the “basic principle of the New Deal were economically sound?” Wonks had to answer the specific questions – is the public option important? – but also explain what parts were sound and why.
So I disagree with Spafford, who writes, “The startling rise of the wonk to political prominence has been buoyed in large part by the hope that the scientific objectivity of the technocrat might finally resolve political disagreement.” The wonk rises more with the wave of liberal legislation of the 111th United States Congress, rather than the waves of centrist deficit reduction or conservative counter-mobilization.
It’s true that the right is more ideologically coherent and part of a “movement.” But it’s not clear to me that this is working well for them right now, or that liberals would be right to try a strategy of replication. Especially since I’d contest the premise that wonk blogging lacks an ideological edge.
2. Liberal Wonkery as Ideology
As an aside, here’s Arthur Delaney’s first wonk chart:
In Frase’s mind, wonkblogging is a “way of policing ideological boundaries and maintaining the illusion that the ruling ideology is merely bi-partisan common sense.” Wonk bloggers merely reproduce technocracy, performing the Very Serious Analysis that always comes back to a set of narrow concerns that coincide with ruling interests.
But is the background ideology of liberal bloggers a “ruling ideology” committed to the status quo? I don’t buy it. First off, just the act of writing about problems and potential policy solutions casts them as problems in need of a solution. Indeed, as many on the right have noted, a crucial feature of wonk blogging isn’t the creation of “solutions” to policy problems but the creation of “problems” in the first place.
Think of some of the things liberal wonk bloggers (at least in the economics space) focus on: unemployment; lack of access to quality, affordable health care; wages decoupled from productivity. These aren’t just put out there as crappy things that are happening. Wonks don’t focus on how there’s nothing good on television, or rain on your wedding day. And the problems they signal aren’t, usually, thought of as personal failings or requiring private, civic solutions. They are problems that the public needs a response for.
What does that amount to? If you link them together, they tell a story about how unemployment is a vicious problem we can counteract, that the shocks we face in life should be insured against, that markets fail or need to be revealed as constructed. And they don’t argue “just deserts” — that some should be left behind, or that hierarchy and inequality are virtues in and of themselves — and instead produce analyses in support of economic and social equality. Everyone should have access to a job, or health care, or a secure retirement.
In other words, they describe the core project of modern American liberalism. Keynesian economics, social insurance, the regulatory state and political equality: wonk blogging builds all of this brick by brick from the bottom-up. Signaling where reform needs to go is increasingly being viewed as the important role pundits and analysts carry out. And rather than derive them from ideology top-down, they’re built bottom-up as a series of problems to be solved.
Wonkiness-as-ideology has its downsides, of course. In line with Frase’s critique, wonky analysis makes virtues uncritically out of economic concepts like “choice” and “markets,” while having no language for “decommodification” or “workplace democracy.” They reflect the economic language of a neoliberal age. (Though if you are Ira Katznelson, you’d argue that this wonky, technocratic, public policy focus of liberalism was baked into the cake in the late 1940s.) There’s an element of liberalism that is focused on “how do we share the fruits of our economic prosperity” that hits a wall in an age of stagnation and austerity.
But I wouldn’t trade it for what the left seems to be offering. Indeed one of the better achievements of mid-century democratic socialism, Michael Harrington’s The Other America, was proto-wonk blogging. He identified problems. He consciously didn’t mention ideology, knowing full well that stating the problem in the context of actually existing solutions would create the real politics. And if he had access to modern computing, Harrington certainly would have put a lot of charts in his book and posted them online.
Google is still embroiled in an EU investigation into alleged abuse of its dominance in search
Google may face a new antitrust investigation by the European Union after a group of competitors complained it was trying to corner the mobile market.
FairSearch, which represents Microsoft (MSFT, Fortune 500), Oracle (ORCL, Fortune 500), Nokia (NOK) and a number of other search engine operators, accused Google (GOOG, Fortune 500) of using its Android operating system to “monopolize the mobile marketplace and control consumer data”.
The group wanted the European Commission to act quickly to prevent Google from repeating “its desktop abuses” in mobile, lawyer Thomas Vinje said in a statement.
The U.S. government concluded a two-year investigation into Google earlier this year with a ruling that the search engine company did not breach U.S. antitrust laws.
But a similar investigation by EU regulators, launched in November 2010, remains open, and preliminary findings released last year found the company was violating European law in four ways.
The European Commission confirmed that it had received the FairSearch complaint about Android, but would not comment further.
A spokesman for European Competition Commissioner Joaquin Almunia also declined to comment on the status of the existing investigation and about whether Google was close to agreeing remedies to address the EU’s concerns.
Google said it would continue to cooperate with the European Commission, but would not comment on the details of the FairSearch allegations.
FairSearch cited industry data showing 70% of smartphones shipped at the end of 2012 were running Android, and Google had 96% of the market in mobile search advertising.
It accused the company of requiring Android smartphone makers who want to offer apps such as Google Maps or YouTube to pre-load Google mobile services and to give them a prominent display by default on their devices.
“This disadvantages other providers, and puts Google’s Android in control of consumer data on a majority of smartphones shipped today,” FairSearch said in its statement.
Earlier this month, six European states — including the U.K., Germany and France — said they would take action against Google after the company failed to respond to EU concerns about privacy of user data.
The spread of mobile computers, in numbers.
- By Benedict Evans on March 15, 2013
Smartphones have created a bridge between two previously separate industries—wireless networks and personal computing. For Internet firms and device makers, this means access to the world’s largest network of people. As can be seen above, the wireless telephone business is large compared to personal computing. In 2012, the world’s mobile operators did $1.2 trillion in business and served around 3.2 billion people, versus perhaps 1.7 billion people who used PCs to access the Internet. By comparison, the combined revenue of Microsoft, Google, Intel, Apple, and the entire global PC industry was $590 billion. Online advertising, the main driver of the consumer Internet, generated only $89 billion in revenue.
PCs still represent a majority of personal computing devices in use globally. But not for long. As smartphone and tablet sales increase rapidly, they are replacing PCs and Microsoft Windows as the dominant personal-computing paradigm. At right are the numbers of PCs, tablets, smartphones, and all mobile phone handsets in use, as well as the number of each sold in 2012. Growth in smartphone sales is coming largely at the expense of older-style “feature phones,” as people replace them, typically once every two years. As can be seen, two-thirds of the mobile phone market has yet to convert to smartphones. Close to a billion smartphones will be sold in 2013, while PC sales will gradually decline.
Smartphones have greatly increased the profitability of the mobile phone handset business. The average selling price of all mobile phones rose from about $105 in 2010 to $180 at the end of 2012, largely driven by Apple’s iPhone. In 2012, Apple sold 136 million iPhones for $85 billion, averaging $629 per phone. By comparison, the average selling price of a PC is about $700. With a further $33 billion in revenue from iPads, Apple’s annual revenue now exceeds the combined business of Intel and Microsoft. Sales by other companies of Android smartphones (not shown) reached 480 million units in 2012, generating an estimated $120 billion in revenue at an average selling price of $250.
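As a rough sanity check, the average selling prices above can be recomputed from the quoted totals. (The small gap to the quoted $629 for the iPhone presumably reflects rounding in the headline revenue and unit figures.)

```python
# Recompute average selling prices (ASPs) from the quoted totals.
iphone_revenue = 85e9    # Apple's 2012 iPhone revenue
iphone_units = 136e6     # iPhones sold in 2012

android_revenue = 120e9  # estimated 2012 Android handset revenue
android_units = 480e6    # Android units sold in 2012

iphone_asp = iphone_revenue / iphone_units    # $625 from rounded totals
android_asp = android_revenue / android_units # $250, matching the text

print(f"iPhone ASP:  ${iphone_asp:.0f}")
print(f"Android ASP: ${android_asp:.0f}")
```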
Illustration: Christine Daniloff/MIT
March 12, 2013
For many companies, moving their web-application servers to the cloud is an attractive option, since cloud-computing services can offer economies of scale, extensive technical support and easy accommodation of demand fluctuations.
But for applications that depend heavily on database queries, cloud hosting can pose as many problems as it solves. Cloud services often partition their servers into “virtual machines,” each of which gets so many operations per second on a server’s central processing unit, so much space in memory, and the like. That makes cloud servers easier to manage, but for database-intensive applications, it can result in the allocation of about 20 times as much hardware as should be necessary. And the cost of that overprovisioning gets passed on to customers.
MIT researchers are developing a new system called DBSeer that should help solve this problem and others, such as the pricing of cloud services and the diagnosis of application slowdowns. At the recent Biennial Conference on Innovative Data Systems Research, the researchers laid out their vision for DBSeer. And in June, at the annual meeting of the Association for Computing Machinery’s Special Interest Group on Management of Data (SIGMOD), they will unveil the algorithms at the heart of DBSeer, which use machine-learning techniques to build accurate models of performance and resource demands of database-driven applications.
DBSeer’s advantages aren’t restricted to cloud computing, either. Teradata, a major database company, has already assigned several of its engineers the task of importing the MIT researchers’ new algorithm — which has been released under an open-source license — into its own software.
Barzan Mozafari, a postdoc in the lab of professor of electrical engineering and computer science Samuel Madden and lead author on both new papers, explains that, with virtual machines, server resources must be allocated according to an application’s peak demand. “You’re not going to hit your peak load all the time,” Mozafari says. “So that means that these resources are going to be underutilized most of the time.”
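Mozafari's point about underutilization is easy to see with toy numbers. This sketch uses an invented hourly load profile with a pronounced daytime peak, not data from the paper:

```python
# Hypothetical hourly load samples (requests/sec) over one day for a
# web application with quiet nights and a busy afternoon.
hourly_load = [120, 90, 60, 45, 50, 80, 200, 450, 700, 820, 900, 950,
               980, 940, 870, 760, 640, 520, 430, 380, 300, 250, 190, 150]

peak = max(hourly_load)                       # capacity must cover this
average = sum(hourly_load) / len(hourly_load)  # what is actually used
utilisation = average / peak

# Provisioning a virtual machine for peak demand leaves roughly half
# the purchased capacity idle on average for this load shape.
print(f"peak: {peak} req/s, average: {average:.0f} req/s, "
      f"utilisation: {utilisation:.0%}")
```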
Moreover, Mozafari says, the provisioning for peak demand is largely guesswork. “It’s very counterintuitive,” Mozafari says, “but you might take on certain types of extra load that might help your overall performance.” Increased demand means that a database server will store more of its frequently used data in its high-speed memory, which can help it process requests more quickly.
On the other hand, a slight increase in demand could cause the system to slow down precipitously — if, for instance, too many requests require modification of the same pieces of data, which need to be updated on multiple servers. “It’s extremely nonlinear,” Mozafari says.
Mozafari, Madden, postdoc Alekh Jindal, and Carlo Curino, a former member of Madden’s group who’s now at Microsoft, use two different techniques in the SIGMOD paper to predict how a database-driven application will respond to increased load. Mozafari describes the first as a “black box” approach: DBSeer simply monitors fluctuations in both the number and type of user requests and system performance and uses machine-learning techniques to correlate the two. This approach is good at predicting the consequences of fluctuations that don’t fall too far outside the range of the training data.
Often, however, database managers — or prospective cloud-computing customers — will be interested in the consequences of a fourfold, tenfold, or even hundredfold increase in demand. For those types of predictions, Mozafari explains, DBSeer uses a “gray box” model, which takes into account the idiosyncrasies of particular database systems.
For instance, Mozafari explains, updating data stored on a hard drive is time-consuming, so most database servers will try to postpone that operation as long as they can, instead storing data modifications in the much faster — but volatile — main memory. At some point, however, the server has to commit its pending modifications to disk, and the criteria for making that decision can vary from one database system to another.
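The deferred-commit behavior Mozafari describes can be mimicked with a toy write buffer. The class below is purely illustrative of the pattern, not how MySQL or any real engine implements it:

```python
class WriteBuffer:
    """Toy model of deferred disk commits: modifications accumulate in
    fast, volatile memory and are flushed to 'disk' in one batch only
    when a threshold is reached."""

    def __init__(self, flush_threshold: int = 4):
        self.pending: dict[str, str] = {}  # in-memory (volatile) buffer
        self.disk: dict[str, str] = {}     # stands in for durable storage
        self.flush_threshold = flush_threshold

    def write(self, key: str, value: str) -> None:
        self.pending[key] = value
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # One batched, expensive commit instead of many small ones.
        self.disk.update(self.pending)
        self.pending.clear()

buf = WriteBuffer()
for i in range(5):
    buf.write(f"k{i}", f"v{i}")
# k0..k3 were committed in a single batch; k4 is still only in memory
# and would be lost on a crash before the next flush.
```

Real systems differ in exactly when they flush (size thresholds, timers, transaction boundaries), which is precisely the per-system variation DBSeer's gray-box model has to encode.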
The version of DBSeer presented at SIGMOD includes a gray-box model of MySQL, one of the most widely used database systems. The researchers are currently building a new model for another popular system, PostgreSQL. Although adapting the model isn’t a negligible undertaking, models tailored to just a handful of systems would cover the large majority of database-driven Web applications.
The researchers tested their prediction algorithm against both a set of benchmark data, called TPC-C, that’s commonly used in database research and against real-world data on modifications to the Wikipedia database. On average, the model was about 80 percent accurate in predicting CPU use and 99 percent accurate in predicting the bandwidth consumed by disk operations.
“We’re really fascinated and thrilled that someone is doing this work,” says Doug Brown, a database software architect at Teradata. “We’ve already taken the code and are prototyping right now.” Initially, Brown says, Teradata will use the MIT researchers’ prediction algorithm to determine customers’ resource requirements. “The really big question for our customers is, ‘How are we going to scale?’” Brown says.
Brown hopes, however, that the algorithm will ultimately help allocate server resources on the fly, as database requests come in. If servers can assess the demands imposed by individual requests and budget accordingly, they can ensure that transaction times stay within the bounds set by customers’ service agreements. For instance, “if you have two big, big resource consumers, you can calculate ahead of time that we’re only going to run two of these in parallel,” Brown says. “There’s all kinds of games you can play in workload management.”