This portion of the incubator focuses on 3D and immersive technologies, specifically 3D object creation using photogrammetry (photographs) and laser scanning (lasers), as well as the creation of basic augmented reality (AR) scenes with Apple's Reality Composer platform.
In the Digital Studio, we have access to equipment for both 3D modeling and laser scanning. The aim of this hands-on experiment is to explore the equipment and try to create 3D models, perhaps for use in the augmented reality (AR) scene to be created in the next part of the incubator.
Photogrammetry setup with lightbox and basic DSLR camera, good for objects down to about the size of your fist
Hand-held laser scanner (borrowed from Schiller), good for quick scanning of larger models (and people!)
Hand-held laser scanner with turntable setup, good for objects down to 3 cm
Content will be added to this section over the incubator week.
Having gained familiarity with the basic functionality of Reality Composer, challenge yourself to create a simple AR scene with behaviors, using the prebuilt models or some imported from Sketchfab. Some possible scenarios with the basic models:
A pool table/bowling lane where the ball flies into the other balls when tapped
"kicking" a football through the uprights
A rocket ship flies into space with sounds (and then slowly comes back to its landing spot)
Whatever else you want!
Reality Composer (RC) for iOS, iPadOS, and macOS makes it easy to build, test, tune, and simulate AR experiences for iPhone or iPad. With live linking, you can rapidly move between Mac and iPhone or Mac and iPad to create stunning AR experiences, then export them to AR Quick Look or integrate them into apps with Xcode. This tutorial will introduce the basic functionality of the tool and review the interface. Please note that screenshots for this introduction were created using Reality Composer on a MacBook. The screen layout is slightly different when working directly on an iPad, but the steps are the same.
Reality Composer is free to download from the App Store for iPad or iPhone, or comes as part of the free Xcode for macOS (Apple account required). A warning: Xcode is quite a large program for developing applications, so it may not be feasible for older MacBooks or those without much storage space. Running Reality Composer on a MacBook also requires you to send your project to an appropriate iPad/iPhone for testing (see below) or open it in a simulator using Xcode. One important fact to note is that Reality Composer projects (i.e. what opens in Reality Composer) are saved as .rcproject files, while sharable experiences that open directly on an iPhone or iPad with the Quick Look tool are .reality files.
When you first open a new Reality Composer project, your first decision is what kind of anchor you want for the project; the anchor type determines what the device needs to detect in order to open your scene in AR. There are five relatively straightforward anchor types to choose from in Reality Composer:
Horizontal - will look for any horizontal surface (e.g. a table or the floor) to open your experience on
Vertical - will look for any vertical surface (e.g. a wall) to open your experience on
Image - will look for a specific defined image that you determine (e.g. a painting or business card) to open your experience around
Face - will look for a person's face (as recognized by Apple's camera) to open your experience around
Object - will look for a previously scanned physical object (see details at the end of this workshop) to open your experience around
Once you have chosen an anchor, the main window of Reality Composer will open; it changes slightly based on the anchor you have chosen (above, the horizontal anchor). You can also change your anchor if needed in the right-hand Properties pane, as well as change the overall physics of the scene if desired (e.g. whether things can fall through the floor; this can also be set object by object later).
The window also opens with two objects already present, a cube and a little placard showing some text. Before checking those out, let's look at the other options on the main toolbar. The most important ones for us will be the Add button, the Behaviors button, and the Properties button, but we can quickly review them all.
The Scenes option opens up a sidebar that allows you to name the "scene," or AR view, you are currently working in, as well as create new scenes that can be linked by certain actions (see below).
The Frame button just lets you navigate more easily around your scene by allowing you to zoom in on certain objects or on the scene as a whole.
The Snap button allows you to snap objects to other objects or to certain horizontal or vertical lines within the scene (similar to the Photoshop snap tool)
The Modify tool allows you to swap between adjusting an object's position/rotation and adjusting its length/width/size (this can also be done in the Properties pane, as we will see)
The Space button swaps between "global" and "object-oriented" coordinates, allowing you to move an object along various axes within your scene
The Add button adds more models to your scene, of lots of different types!
The Play button (also the AR button when working on an iPad/iPhone) allows you to test out your scene on a device. The Send To button (Mac only) allows you to send your scene directly to a linked iPad for testing
The Info button checks for updates to your models/scenes
The Behaviors button allows you to assign different behaviors to your objects based on specific triggers (e.g. when the object is tapped, it flies into the air)
The Properties button allows you to edit the properties of a specific model or of the scene as a whole.
Let's play with some objects, starting with what comes in our default scene.
Clicking on the cube will automatically open the Properties pane, which allows you to directly edit its various properties like width, height, color, etc. You can also name the object, transform it in space (if you want to set something exactly), and decide whether it will obey the laws of physics (e.g. fall to the surface if it appears up in the air). You can also edit directly in the viewing window once an object is selected: arrows pointing along the axes allow you to move the object, while clicking the tip of an arrow allows you to rotate it around that axis. Give it a try!
Clicking on the placard will yield a similar pane, though this one also allows you to edit the text appearing on the sign. Each object you add will have its own individual properties that you can edit here. For now, edit these two items however you wish!
If you want to add a new object, just click the + Add button. There are many pre-installed 3D objects to work from, as well as signs that can hold text and "Frames" that can hold attached Images. You can also introduce your own objects.
For now, I will drop a plane into our scene, either by clicking and dragging it in or by double-clicking it, and then rotate it to point away from us.
Let's do one last thing before trying out our model in AR: adding a behavior! Select the plane and then click the Behaviors button on the toolbar, and a new pane will open along the bottom of the page.
Clicking the + button will open the new behavior pane, where there are several pre-set behaviors you can choose from, or if you scroll down to the bottom you can create a custom behavior. For now, I will choose Tap and Add Force so we can give our plane some energy
Now, our behavior pane has its first behavior! You can rename the behavior (currently called Behavior) in the left pane; the Trigger pane allows you to determine what action will make the behavior start (in this case, tapping) as well as which affected objects have the trigger attached (in this case, the plane, but you could add more!)
Finally, the action sequence allows you to create a series of effects that happen, either simultaneously or one after another, when the trigger is...triggered. In this case, we are going to have the plane start moving forward at a certain velocity.
So we can make our plane move forward, but that's not really taking off. Adding a second force on an upward trajectory after a moment will make this takeoff look a bit more realistic. To add further actions to the sequence, simply tap the + sign next to the words Action Sequence in the Behaviors pane. This will pop up a variety of pre-coded actions you can choose from.
In the image above, I added two new behaviors, a Wait behavior and a second Add Force behavior at a 45-degree upward angle. Importantly, I directly attached the Wait behavior to the first Add Force behavior (just a drag and drop), which means these two actions will begin simultaneously, and the second Add Force will not start until the first set is complete. This means our plane will move forward a certain amount before briefly "taking off".
Now that we have an experience, we need to test it out to see how it is functioning. There are a few ways to do this.
If you are working on an iPad or iPhone, it's easy! Just hit the AR button on the top toolbar to open the experience in AR, and then the Play button to start the experience from the beginning.
If you are working on a Mac, it's a bit more difficult. If you hit the Play button on the top toolbar, the experience will start, but it will obviously not be in AR, making testing a little bit difficult (though you can certainly test the functionality, as in the images below where our plane has flown far off into the distance).
There are other options for testing from a Mac, however. If you have an iPhone or iPad handy that has Reality Composer installed, you could connect it via a USB/USB-C cord to your computer. If you then open Reality Composer on both devices and hit the Send To button on your Mac, the file will open in Reality Composer and be editable/testable on your iPad!
Note that if you are using special imported models, they may not be available on your second device, unless they have been imported there as well.
Another option is simply to export the file and share it as a full .reality file. To do this, go to File --> Export and choose to export either the entire project or just the current scene. After saving it on your computer, navigate to that folder, select the .reality file, go to File --> Share in the Finder menu, and choose how you want to share the file (text, AirDrop, etc.) with your iPad or iPhone. Opening it on your iPhone or iPad in this way does not require Reality Composer, as it uses the built-in Apple Quick Look platform (you can also share your experience with other people this way!).
We are going to add one last action that will make our scene "replayable": returning the plane to its original location after a certain amount of time. Otherwise, we tap the plane once and it is gone!
This can be done by adding one more action to our action sequence, a Move, Rotate, Scale To action that will move our object back home. Adding this action to the end of our action list and selecting the plane as the affected object allows you to choose where you want the plane to return to. In this case, I adjust it so it ends up back where it started (by moving the plane in the image back to the left, to its starting place). Also notice I added one more Wait action so that the plane waits one second after the force stops acting on it before returning home.
And that's it! Now the plane will return to its original location (and might even show off the built-in physics on the way, as the plane hits the block as it returns). Project testing videos and the files created with this tutorial can be found below. There are obviously a ton of other behaviors and models to play with, so give it a try!
As a bonus, you could use the Hide and Show behaviors to make the plane seem to magically "appear" back in its home location at a certain moment. See if you can make it work!
A DS project can roughly be divided into three sections focusing on the project's data, the presentation of that data, and the contextualization of that data.
Some Initial questions to consider when developing a DS project
Data is a very general term but includes all the information you have gathered to support the defined goal of your project. Data may be qualitative or textual (e.g. a transcription of a historical document), quantitative or numerical (e.g. a breakdown of racial demographics in Boston between 1940 and 1960), audio/visual (e.g. images, audio recordings, video), spatial (e.g. the locations of excavated ancient Greek theaters), or 3D (e.g. a 3D model of a rum keg from 1776). Data may have accompanying metadata (data about your data) describing where the data comes from and may contain other useful attributes for searching and filtering your dataset.
What kind(s) of data might my project contain?
Am I creating a dataset myself or using data that is already accessible? If creating it myself, what work might need to be accomplished to create and organize my dataset?
What ways might I want the user to engage with the raw data (in terms of searching/filtering/downloading, rather than visualization)?
Do I need permissions to use any of the data?
Over the course of the next few days, we will be talking about a variety of ways data can be presented. Data presentation can be undertaken in many ways through DS tools, ranging from a traditional document/image viewer to an interactive map to a data dashboard created through tools like Tableau (and many other ways). The key goal of data presentation is to make your data accessible and comprehensible to the user in order to support the goals and narrative of your project.
A few initial data presentation questions that may be useful as you begin to develop a digital scholarship project:
How can data visualizations help support the goals of my project?
How can data visualizations help users understand my dataset better?
What digital scholarship approaches or tools might be useful for helping me analyze and visualize my data (textual analysis, mapping, charts/graphs, exhibitions, etc)?
Are open-source tools for accomplishing my goals already available, or will I need funding for custom development?
Sometimes overlooked, data contextualization can be just as important as the other aspects of a digital scholarship project. Data contextualization includes all the information a person needs to know about how your data was gathered, what issues and biases it may have, and any useful historical or cultural background. It may also describe how your project was constructed on a technical level. Finally, it may include traditional scholarly apparatus in the form of citations, bibliographies, or further readings.
Some initial questions around data contextualization:
What innate biases might my data have?
What does my audience need to know in order to understand my data properly?
How can my audience learn more about the subject discussed?
What is the goal of my project?
Who is the prospective audience for my project?
What areas of DS will be most useful for my project to use to accomplish my goals?
Do I have or need a project team to accomplish my goals?
What specific technologies or expertise will I need for my project to be considered a success? Are there members of my project team with those skills, or do I need to develop them?
Do I have any copyright concerns?
What is the timeline for my project? Are there benchmarks I could put in place to ensure the project is moving along at an appropriate pace?
Do I have or need funding for my project?
Where will my project be hosted/what kind of support am I looking for (institutional vs personal)?
Every project has things it does well and things that could be explored further. Here are a few examples of DS projects currently out in the world.
The DH Awards site contains a great number of excellent DH projects to explore; some of the below are pulled from that resource. There are also those from the BC DS site and many DH group sites from universities across the country openly available to explore.
New York Restaurant Keepers (part of NY Immigrant City project out of NYU)
Digital Harrisburg (Harrisburg University / Messiah University)
Atlas of Early Printing (University of Iowa)
Bombing Missions of the Vietnam War (ESRI)
Plotting English-Language Novels in Wales
Battle of Hong Kong 1941: A Spatial History
Mirror of Race (Boston College)
Byzantine Textiles (Dumbarton Oaks)
WHEN MELODIES GATHER: ORAL ART OF THE MAHRA (audio exhibition)
The Resemblage Project: Stories of Aging
Tudor Networks (Stanford and Queen Mary University of London, among others)
Coins: A Journey (Münzkabinett Berlin)
Historiography of the American Revolution (timeline)
The Dream of the Rood (EVT Viewer)
Digital Gabii Vol 2 (Michigan)
Women in Science 3D Lab (Bryn Mawr)
The effectiveness of a DS project can be evaluated in a number of ways, including clarity, ease of use, and effectiveness in accomplishing its goals.
Digital scholarship is a constantly evolving and expanding world, meaning new projects are constantly being developed. While at first this may be a bit confusing, it has the advantage that projects with similar goals to yours (either in presentation or in dataset) most likely already exist, offering a place to start from in terms of project conception and inspiration.
Performing this kind of "environmental scan" (similar to creating an initial research bibliography!) can often prove fruitful. This makes the ability to quickly evaluate a DS project beneficial, offering quick suggestions for tools or techniques you might explore and utilize, or avoid entirely.
Attached is a short document we use when quickly evaluating a DS project. It is broken down into several sections useful to highlight and consider as you move forward in your own project. Additionally, several links are provided focusing on the evaluation of DH projects, particularly as DH becomes more accepted within disciplines as a means for promotion and tenure.
Exercise visualization examples
Critique a Visualization Project (https://public.tableau.com/app/discover)
Does this project have a clearly defined message or research question? What is it?
What is being visualized?
What are some of the applied tools or methods?
What is the underlying software or platform?
Does the visualization add value or impact and how so?
What kind of data are they using? What sources is it drawn from?
Identify any usability issues and what improvements could be suggested.
Who are the collaborators, project team members, or other partners on this project?
Is there anything else worth pointing out not covered above?
Download the following 2 datasets to your computer:
Hands on exercise:
For this exercise, you will work with the 2019 Massachusetts crime data by city. The spreadsheet includes one table and some extra formatting.
1. Connect the excel file to Tableau.
2. Drag over the “Crime 2019” table to the area that says “Drag sheets here”. Look at your data table. Does it look right to you? Consider the table format, header, footer, and variable type.
3. Use the data interpreter function in Tableau, review Data Interpreter results in the preview area.
4. Check out the data table. How does it look now? Consider the table format, header, footer, and variable type.
5. Change the variable names to better reflect what they are. E.g., crime total 1 to Violent crime, crime total 2 to Property crime. (provide a screenshot)
6. What potential data visualization questions can you ask from this data set?
Hands on:
We will be working with sample networked data sets based on the Oxford Dictionary of National Biography and the Six Degrees of Francis Bacon project. These data sets include a list of names and relationships for early seventeenth-century Quakers.
Download quakers_nodelist and quakers_edgelist CSV files to your Desktop
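If you would like to peek at these files before loading them into a network tool, here is a minimal Python sketch with pandas and networkx. It assumes the Programming Historian layout for these files (a Name column in the nodelist and Source/Target columns in the edgelist); adjust the column names to match your downloads.

import pandas as pd
import networkx as nx

# Load the two CSVs downloaded to your Desktop (paths are placeholders)
nodes = pd.read_csv("quakers_nodelist.csv")
edges = pd.read_csv("quakers_edgelist.csv")

# Build a simple undirected graph: one node per person, one edge per relationship
G = nx.Graph()
G.add_nodes_from(nodes["Name"])                          # assumes a "Name" column
G.add_edges_from(zip(edges["Source"], edges["Target"]))  # assumes Source/Target columns

print(G.number_of_nodes(), "people and", G.number_of_edges(), "relationships")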
This is a guide to installing and running Tableau Desktop on your personal computer. Please note that all workstations in the Digital Studio (on the second floor of O'Neill Library) already have Tableau Desktop installed.
Tableau has versions for both Windows and Mac. Detailed system requirements for Tableau here: https://public.tableau.com/en-us/s/download.
Tableau Desktop is visualization software used to create data visualizations and interactive dashboards. If you are a student, instructor, or researcher, you can request a free, renewable, one-year license for Tableau Desktop through the Tableau Academic Program. For instructors and researchers, the individual license is valid for one year and can be renewed each year if you are teaching Tableau in the classroom or conducting non-commercial academic research. The student license also expires after one year; you can request a new license each year as a full-time student.
If you are a member of the public, please consider using Tableau Public instead, which is the free version of Tableau Desktop.
Here are the steps for students: (Installation process for instructors and researchers is similar. Just follow the instructions on the screen.)
Step 1: Go to https://www.tableau.com/academic/students (Here is the link for instructors.)
Tableau Student
Step 2: Click on Get Tableau for Free.
Step 3: A web form will pop up. Complete all of the requested information, using your official BC email address when you fill out the form.
Step 4: Next, click on Verify Student Status.
Step 5: You will receive an email with a product key and link to download the software.
Step 6: Click on Download Tableau Desktop from your email and copy the product key.
Step 7: Follow the installation instructions to install Tableau to your computer.
Step 8: Activate your Tableau with your license key.
For instructors and researchers, click on Request Individual License on the screen.
The pop-up request form is similar to the student one described above, but additionally asks "I plan to use Tableau Desktop for..." Under that prompt, you can select "Teaching only," "Noncommercial academic research only," or both. Select the option that fits your needs best. You do not need to be an instructor to get a Tableau copy.
Tableau Public
Following are the general steps to download Tableau Public:
Go to Tableau Public Download Page: public.tableau.com
Enter your email address and click "Download the App".
Once the installation file has been downloaded to your computer, run it and follow the prompts to install Tableau on your Mac or PC.
In groups, explore these exhibits. Take a quick look at all of them and then choose two to focus on and discuss the questions below.
, Platform:
, Platform:
, Platform: a developer-customized site
, Platform:
, Platform: WordPress
, Platform: WordPress
What is your general impression of the exhibit? (How inviting and effective is it?)
How central are the exhibit objects to the story being told?
How does the visual design of the site affect the tone/feel?
What do you think about the flow of the exhibit and the navigability of the site?
Additional Info
Items brought into Exhibit.so require IIIF manifest URLs. IIIF is a standard that enables rich functionality across all web browsers for different kinds of digital objects, e.g., zooming into an image. Learn more and find IIIF resources.
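If you are curious what a IIIF manifest actually contains, the short Python sketch below fetches one and prints its label and page count. The URL is a placeholder (substitute any manifest URL from the object spreadsheet), and the keys follow the IIIF Presentation API 2.x layout, so treat this as an illustration rather than part of the exercise.

import requests

manifest_url = "https://example.org/iiif/manifest.json"  # placeholder; use a real manifest URL
manifest = requests.get(manifest_url).json()

# Presentation 2.x manifests have a human-readable label and a list of canvases (pages/images)
print("Label:", manifest.get("label"))
canvases = manifest.get("sequences", [{}])[0].get("canvases", [])
print("Canvases (pages/images):", len(canvases))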
1.) Open Exhibit.so and the object spreadsheet
2.) In Exhibit.so, scroll down and click on the Create an Exhibit button.
3.) Select and Add Information:
Select Scroll template
Add Title: DS Incubator (or whatever you want)
Add Description: DS Incubator Exercise (or whatever you want)
Select Public
Click Create Exhibit
When you get to the exhibit page, click Share (bottom left) and copy and paste the URL into a location you can save it to be able to access and edit the exhibit later.
4.) Adding Items
A single image of an Anti-Slavery Almanac:
The full text of an Anti-Slavery Almanac:
1.) Saving & sharing your exhibit
2.) Adding a simple object
Click the Add Item button (bottom left), paste a IIIF manifest URL into the IIIF manifest URL box, and click Import.
Click on the item and click on Add to Exhibit
Click on the plus sign...
and copy and paste the following "dummy text" into the text box:
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo.
Click on the checkmark (see below).
Click on the plus sign again and add some dummy text to the text box.
Hover on the image and you see a plus, minus, and rotation symbol appear. Click the plus sign to zoom in and click and drag the image to frame the illustration well. Click the checkmark.
3.) Adding a compound object
Add the following object the same way you did the object above: click Add Item, paste the IIIF manifest URL (available below), click Import, then click on the new item and Add to Exhibit
Once the item is added, navigate to page 13 of the Almanac (image 18 of the scans)
Like you did before, click the plus sign, add some text, and click the checkmark
This portion of the incubator is focused on an introduction to spatial data (vector and raster), with workshops on creating spatial data and finding and georeferencing historical maps. Finally, the different datasets are combined in an ArcGIS online map.
Bonus: QGIS Workshop (in progress)
1.) Explore the project Digital Dante: Original Research & Ideas and consider:
Purpose: How clear the intention/purpose of the project is made
Taxonomy: The effectiveness and clarity of labels and headers
Navigation: How easy it is to move around the site and get to specific info/sections
Wayfinding: How well can you find your way back to information you previously saw
Visual design: How the colors, fonts, visuals, layout, interface, etc. affects your ability to use the site, engage with the content, and make you feel
Accessibility: Are there any potential accessibility issues
2.) Draw a rough diagram of the information architecture. (You can keep it simple and focus on the higher end of the hierarchy (e.g., the first three tiers).)
An opportunity to explore one of the spatial databases discussed in the presentation to look more closely at how their data is organized and to create your own simple spatial dataset.
Create a few spatial points of interest using the format we saw above (ObjectID, X (longitude), Y (latitude), plus any other attributes you want) in Google Sheets/Excel/OpenOffice, or explore one of the databases above and practice downloading and opening the datasets to get comfortable dealing with vector datasets. A scripted sketch of the same format appears below.
Remember to save your file as a .csv when you are done!
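If you would rather script the file than build it in a spreadsheet, here is a minimal pandas sketch of the same format; the three points and their attributes are made-up examples, so swap in your own places.

import pandas as pd

# Example points of interest: ObjectID, X (longitude), Y (latitude), plus any extra attributes
points = pd.DataFrame({
    "ObjectID": [1, 2, 3],
    "X": [-71.1685, -71.0589, -71.0972],   # longitude
    "Y": [42.3355, 42.3601, 42.3467],      # latitude
    "Name": ["Boston College", "Boston City Hall", "Fenway Park"],
})

# Save as a .csv so it can be checked in Google My Maps or imported into ArcGIS Online
points.to_csv("points_of_interest.csv", index=False)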
Some Boston Spatial Databases:
Possibly useful datasets for exploring questions about segregation and redlining.
Hospitals (download CSV)
Neighborhood boundaries (download Shapefile)
Boston Social Vulnerability (download Shapefile) [play with attribute and drawing style options]
Public Schools (download CSV)
Non-Public Schools (download CSV)
Want to check your csv data real quick? Go to Google My Maps, create a New Map, and click "Import" to import your CSV data to make sure it's looking good. Otherwise, we will be checking it later when we get to ArcGIS online!
Spatial data adds a geographic dimension to a qualitative and/or quantitative data set, situating it in a particular location within a coordinate system relative to other data points. (The coordinate system can be a real-world system or a locally created one used to meet the needs of a particular project.)
Spatial datasets, in general, come in two distinct forms, vector data (points, lines, and polygons) and raster (or pixel data). Raster and vector data can come together in the creation of a wide variety of mapping projects, from a traditional figure with an explanatory legend and caption, such as might appear in an academic text, to an online interactive platform that allows for the searching or filtering of thousands of pieces of spatial data or hundreds of historical maps.
Vector data includes points, lines, or polygons (shapes made up of straight lines) containing spatial information that represent some sort of feature or event in a physical or imagined landscape and may contain other types of qualitative or quantitative information, called attributes. A point may represent a tree, a city, or a moment in time. Lines might indicate the street grid of a town, the path someone traveled across the world, or a social link between two communities. Polygons can mark the boundaries of a country or voting district, the catchment area of a river, or a single city block.
For example, the relatively simple and ongoing World Travel and Description project from the Burns Library collection pictured below uses vector point data to offer a selection of images and accounts from individuals and their observations about how the cities and landscapes they visited appeared. Users can filter the point data by date or search for particular location names in the search bar.
Raster data consists of "cells" of data covering a specific area (its extent), with attribute values in each cell representing a particular characteristic. It may still depict points, lines, and polygons, but these shapes are themselves composed of pixels (the way a jpeg or other image file type is).
Data of this type may take many forms, such as satellite imagery containing vegetation or elevation data, precipitation maps, or even an historical map, which has been given a spatial reference. Unlike vector data, raster data has a particular resolution, meaning each pixel represents a particular geographic region of a specific size.
Most projects combine various forms of vector and raster datasets.
Explore one of the databases discussed in the presentation and download an historical map of interest (or use one of your own if you already have one!). Then, using the MapWarper online tool, georeference your map with at least 4 points.
David Rumsey Map Collection (https://www.davidrumsey.com/)
Library of Congress Map Collection (https://www.loc.gov/maps/)
USGS Historical Topographic Map Explorer (https://livingatlas.arcgis.com/topoexplorer/index.html)
To look at your data against an historical Boston redlining map, we recommend using one of the "Residential Security Maps" from the Boston Public library.
Georeferenced version of the redlining map:
Map Warper is an open-source map warping, georectification, and image georeferencing tool developed, hosted, and maintained by Tim Waters.
In Map Warper, you can browse and download maps others have uploaded without an account. To georectify your own map, however, you must create one. Having an account also allows you to easily return to your maps later.
All you need to create an account is an active email address. It may also be linked to an active Facebook or Github account.
On the top right corner of the page, click "Create Account"
Select a username and password and enter an active email address.
Click "Sign up"! You should quickly receive an email to confirm your account
Now that you are logged in, you can upload your own images to the Map Warper server in order to georeference them.
By uploading images to the website, you agree that you have permission to do so, and accept that anyone else can potentially view and use them, including changing control points. As a freely available tool, you should not expect Map Warper to store your map indefinitely; once it has been georeferenced, you should plan on storing your georeferenced map on your local hard drive or a file storage platform like GoogleDrive.
Click “Upload Map” on the main toolbar (note that if you are not yet logged in, you will be asked to do so at this point).
Insert any available metadata and a description of the map. This is useful both for your own records and for anyone else searching for similar maps on the Map Warper server.
At the bottom of the page, choose to Upload an Image File from your local computer or from a URL. Once the file has been selected, click "Create"
Once the upload is complete, a new page will appear informing you that the map was successfully created as well as providing an image of the uploaded map.
Now the map is on the platform, but it does not yet have any spatial information associated with it. The next step is to use what are called "control points" to place your map in a “real-world” coordinate system where it can interact with other types of spatial data.
Note that from the main toolbar you can also edit the original metadata fields, crop out unwanted portions of your map, and see a history of interactions with the map.
Once your image is displayed, select "Rectify" on the main toolbar.
This opens up the Georectifying page, the most important page in this tutorial. It is composed of two windows, one showing your map and one showing a “basemap” which you will be using to geolocate your map.
In the top right corner of each map there are a series of buttons that help you navigate the map and add control points
The goal here is to create what are called "control points": points that correspond between your uploaded map and the basemap. This is done by zooming in on each map in turn and creating a control point as close to the same spot as possible in each map.
The last two icons appear only on the basemap and are used to adjust it as needed to help with georeferencing
Navigate on your map to an easily identifiable location. In this example, I have chosen the tip of the island in the middle of Paris that the Notre Dame Cathedral is on. Note that an external mouse with a scroll wheel can make the zooming/moving process easier; zoom and pan buttons are also provided in each window.
Click the “Add Control Point” icon, then click again on your map in the desired location. A little control point should pop up!
Swap to the basemap and click the “Pan” tool (the hand) to find the proper location, then again select the “Add Control Point” tool and click on the corresponding point on the Basemap.
Once you have created a control point on each map, scroll down and click the “Add Control Point.” This will add the control point coordinates to a list of points below, which you can see by clicking the words “Control Points."
You will need at least 3 control points to geolocate your map, but more are preferable. It is also advisable to spread your points across the map rather than cluster them; this ensures that the map is georeferenced evenly rather than only in one area. If you need to delete a point, this option is available from the "Control Points" table.
Remember: places change over time! Try to use features that remain as consistent as possible on both maps. In general, the more control points you add, the more accurate your map will be.
After you add the 4th control point, your table of points will start including error information, as the points are triangulated against one another. Note that this error may not mean that you are doing anything wrong, particularly in an older map that is not as spatially accurate as something more modern! On the other hand, if your error is quite high and you believe your map is relatively accurate, you may have misplaced a control point somewhere. Usually, high error is caused by a single point being misreferenced.
When you feel like you have enough points scattered around your map, we are ready to georectify the map! Remember you can always come back later and add new points or remove old ones if you feel like the result is not to your liking. To georectify your map, just click “Warp Image!” at the bottom of the page and you’ll get a notice that your rectifier is working.
When the map is finished rectifying, you will get a notification that rectification is complete. Now you should be able to see your map overlaid on the basemap, as well as toggle it on and off or adjust its opacity to check for accuracy!
If the map is to your liking, you are ready to export. Map Warper offers a variety of ways to export your map depending on your needs
To export your map, Select the Export tab on the toolbar. A window like that seen below will pop up, giving you a variety of choices for exporting
Some exporting options:
GeoTiff:
public domain standard; easily imported into a wide variety of spatial platforms like ArcGIS or QGIS; good for backing up your georeferenced map on your local computer or in cloud storage like Google Drive
.kml:
Easy import directly into GoogleEarth
Tiles (Google/OSM scheme)
Useful for loading into tools like ArcGIS online and Knightlab StoryMap JS. Remember to ensure a backup of your files elsewhere though in case your map is eventually removed from Map Warper.
Adding your tiles to an ArcGIS online map can be complicated. From an empty map, choose Add --> Add Layer from Web and then select a "Tile Layer". Where it says “URL” copy over the Tiles (Google/OSM scheme) URL from your Map Warper file. It will look something like: https://mapwarper.net/maps/tile/49503/{z}/{x}/{y}.png.
However, note that the end of the URL should look like “{level}/{col}/{row}.jpg” according to the instructions given. Replace the {z}/{x}/{y}.png at the end of your URL with this ending, creating something that looks like: https://mapwarper.net/maps/tile/49503/{level}/{col}/{row}.jpg. It should now load properly into ArcGIS online
Add a control point to the maps
Move a control point that has already been added to the map
Pan around the map
Change the basemap to a Custom, already georectified basemap of your choosing (if available)
Swap between the initial basemap and satellite imagery, depending on which is easier to georeference your map
QGIS is a powerful open-source Geographic Information System platform with a bit of a learning curve...
QGIS (like the related ArcGIS Pro/Desktop) is a powerful tool used for a wide variety of purposes across many fields. GIS in general is a huge subject, with entire degree and certificate programs devoted to it. As such, we will barely scratch the surface in this workshop, but by the end you will be able to:
Search for and add pre-created basemaps;
Add your own georeferenced raster data;
Import basic vector datasets from csvs, shapefiles, and geojsons;
Create a new vector dataset from scratch;
Perform some basic styling of data;
Prepare your data for export.
That's a lot...let's get started!
For those interested in diving deeper, I highly recommend reading the thorough QGIS Handbook; much of the below is taken from there and from the step-by-step Training Manual. Links to specific parts of the handbook are referenced above.
When you load up a new map, everything is blank...let's get something to start with.
Initially, QGIS only has a single basemap to work with; let's add it!
In your browser pane, find XYZ Tiles and click the down arrow to find the Open Street Map basemap. Click and drag it into your layers box, and it will appear on the map.
There are lots of different options for loading basemaps, but first you have to connect them to your project from their hosted locations online (or self-host them). Here's a fast way to get a bunch of standard basemaps, though, thanks to Klas Karlsson (one of the main QGIS developers)!
1) Download the Python script below and open it in a text editor (e.g. Notepad).
2) Open the Python console in QGIS by going to Plugins --> Python Console.
3) Copy the Python script into the console and press Enter.
4) Enjoy all your basemaps! Thanks Klas!
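For reference, the heart of a script like that is just writing XYZ connection entries into the QGIS settings. A minimal sketch of the idea, pasted into the Python console (one example source shown; the helper name and zoom levels here are illustrative, not part of the original script):

from qgis.PyQt.QtCore import QSettings

def add_xyz_basemap(name, url, zmin=0, zmax=19):
    # XYZ tile connections live under qgis/connections-xyz/<name>/ in the settings
    settings = QSettings()
    settings.setValue(f"qgis/connections-xyz/{name}/url", url)
    settings.setValue(f"qgis/connections-xyz/{name}/zmin", zmin)
    settings.setValue(f"qgis/connections-xyz/{name}/zmax", zmax)

# Example: the standard OpenStreetMap tile service
add_xyz_basemap("OpenStreetMap Standard", "https://tile.openstreetmap.org/{z}/{x}/{y}.png")

# Refresh the Browser panel so the new entry shows up under XYZ Tiles
iface.reloadConnections()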
What if you want to load your own georeferenced map, like one from MapWarper? Easy enough!
1) Right-click on XYZ and select "New Connection"
2) Look back at your Map Warper project page and choose the Tiles URL (the same one you would use with ArcGIS Online or Knightlab StoryMaps).
3) Name your connection and you should be good to go!
Finally, what about importing a local raster file, whether it's a georeferenced historical map, a digital elevation model, or some other raster? Even easier!
1) Make sure you know where the file is hosted on your computer, or download the file from the internet (Like with the rectified GeoTiff from Mapwarper)
2) In the main toolbar, go to Layer --> Add Layer --> Add Raster Layer
3) In the Source section, click the "..." and navigate to your saved georeferenced raster as your raster dataset, open it up, and click "Add" to add it to your map.
It may look the same, but this map is locally hosted!
That's it for the basics of adding rasters and basemaps to QGIS!
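If you prefer the Python console, the same raster import can be done in a line or two; a sketch, with a placeholder path:

# Run in the QGIS Python console; the path is a placeholder for your saved GeoTIFF
layer = iface.addRasterLayer("/path/to/georeferenced_map.tif", "Georeferenced historical map")
if not layer or not layer.isValid():
    print("Layer failed to load; check the file path")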
Lines, Points, and Polygons
Ok, we've got a background map, now what about vector data?
Adding vector data depends on what kind of format your data is in. Let's run through the standard three types.
To add CSV data to the map, go to Layer --> Add Layer --> Add Delimited Text Layer
Here I've selected a CSV of all the places mentioned in a Jesuit catalogue (about 30,000 of them)
And there you go!
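The same CSV import can also be scripted through the delimited-text provider. A sketch, assuming the file has longitude/latitude columns named X and Y in WGS84 (adjust the path and field names to match your data):

from qgis.core import QgsVectorLayer, QgsProject

# The URI bundles the path, delimiter, coordinate fields, and CRS into one string
uri = ("file:///path/to/catalogue_places.csv"
       "?delimiter=,&xField=X&yField=Y&crs=EPSG:4326")

layer = QgsVectorLayer(uri, "Catalogue places", "delimitedtext")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)
else:
    print("CSV layer failed to load; check the path and field names")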
Shapefiles are a very common file type for individual spatial data layers and can be downloaded from spatial database sites like BostonMaps. To import a shapefile into QGIS:
1) Go to Layer --> Add Layer --> Add Vector Layer
2) Select your ZIPPED folder containing your shapefile as the file to be imported, and click Add!
And your shapefile (in this case, polygons representing the open public spaces of Boston) is added to the map!
GeoJSON files are imported the same way as shapefiles, though there is no need to zip anything since it's just one file!
Up to now, we've been adding other people's data, or data we generated in a spreadsheet...what if we want to create our own data from scratch inside QGIS?
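Both formats can also be loaded from the Python console through the OGR provider; a brief sketch with placeholder paths:

from qgis.core import QgsVectorLayer, QgsProject

# For a shapefile, point at the .shp inside the (unzipped) folder; for GeoJSON, at the single file
open_space = QgsVectorLayer("/path/to/Open_Space.shp", "Boston open space", "ogr")
features = QgsVectorLayer("/path/to/features.geojson", "Imported features", "ogr")

for lyr in (open_space, features):
    if lyr.isValid():
        QgsProject.instance().addMapLayer(lyr)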
Instead of Adding a Layer, we Create a new one!
When making a new vector layer from scratch, you define its type and attributes similar to how you would in a spreadsheet.
Here I've created a Point shapefile called "Cities". To start adding to it, I just right-click and toggle editing on!
When editing is on, the small editing toolbar will become accessible
From left to right the tools are:
Current Edits allows you to manage your editing session. Here you can save and rollback edits for one or more selected layers.
Toggle Editing provides an additional means of beginning or ending an editing session for a selected layer.
Save Layer Edits allows you to save edits for the selected layer(s) during an editing session.
The Add Features tool will change to the appropriate geometry depending on whether a point, line or polygon layer is selected. Points and vertices of lines and polygons are created by left clicking. To complete a line or polygon feature right click. After adding a feature you will be prompted to enter the attributes.
The Move Tool lets you move whole features by clicking them and dragging them to a new position.
The Node Tool lets you move individual vertices: click a feature once to select it (its vertices change to red boxes), click an individual vertex to select it (it turns to a dark blue box), then move it to the desired location. Edges between vertices can also be selected and moved, double-clicking an edge adds a new vertex there, and selected vertices can be deleted with the Delete key.
Features can be deleted, cut, copied, and pasted with the Delete Selected, Cut Features, Copy Features, and Paste Features tools.
After clicking to create a new Feature, a box to fill in the attributes you've chosen will appear. Below are the attributes for my pretend Cities shapefile.
You can access and edit all the attributes from your current shapefile by right clicking the name of your layer and selecting Open Attribute Table.
Now your data is in....how to make it look good?
Let's look at different ways to style your data using some archaeological data from the site of Gabii originally imported as a .geojson.
When initially imported, all the polygons are red with a black outline, which gives us some general information about the location but doesn't really differentiate the different kinds of archaeological features present. To do that, we need to change our styling.
To get to the styling options (also called Symbology), just double click the layer name in your Layers pane.
Here you can select how you want your features to be styled; the default is "Single Symbol," but often you will want Categorized or, for quantitative attributes, Graduated ways of displaying your features. For now, let's select Categorized.
When stylizing by category, you can select which Attribute(s) you want to stylize by; I'm going to select the Descriptio(n) field because that's the different types of features in my layer.
Once you've picked your attribute, clicking "Classify" at the bottom of the box will assign an initial style to each of your features.
Each feature type is now a different color! You could further customize each individual style if you wanted on the symbology page.
Or, here I have made the color of the outline of the feature change, rather than the fill, in order to make the map more understandable.
Lots can be done with symbology; feel free to explore!
Exporting your figure to share with others or for publication is often the final product for your QGIS map. Here we talk about the basics.
So far we have been working in what I call "Data View" where you can create, edit, and generally mess with your data. Now we are going to look at Print Layout, which allows you to organize your map for publication.
To access the print layout, go to Project --> New Print Layout. A box will appear asking you to name the layout (useful if you are creating multiple figures, for example).
A new window will appear that lets you create your layout, almost like working in a word document.
The left toolbar is your friend, allowing you to add map windows to the figure, as well as things like north arrows, scale bars, titles, legends, and other labels.
Each time you add a feature, it will appear in what is basically the "Layers" sidebar of your print layout, allowing you to further edit its properties. Above I've quickly added a scale bar, north arrow, and legend to the map.
When it looks good, you can export the figure! My figure below is a .tiff
"Story maps" combine geographic information with text, media, and other story elements. In our exercise, we will be using ArcGIS' Story Maps. Another common tool is Knightlab's StoryMaps (see tutorial).
A Story Map About Story Maps (uses the full version, so there are more features than the public account version of Story Maps allows)
(use your ArcGIS account to sign in)
Explore some to...
Get a sense of how they organize and present information.
Identify one to focus on and consider what you think is effective and less effective about it.
Open the & and keep them handy for the exercise
Sign in to using your ArcGIS account credentials
Click on "New Stoy" (upper right button) and select "Start from Scratch"
1.) Adding Text:
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu.
2.) Embedding an image:
3.) Embedding a video:
4.) Adding an ArcGIS Map:
You will be bringing the map you created on Tuesday
5.) Creating a Side Car:
Robert Morris photograph
Roberts vs. City of Boston Argument book title page
Anti-segregation Massachusetts Legislature document, 1851
Anti-segregation protest at Boston School Committee headquarters photograph:
Project Examples:
(an already prepared dataset)
1.) Go to
2.) Cut and paste the URLs below to the Scrape box (right side) and click Scrape
The URLs are to text files of Frederick Douglass's Narrative of the Life of Frederick Douglass, an American Slave; My Bondage and My Freedom; Abolition Fanaticism in New York; and Collected Articles of Frederick Douglass
3.) Click on Prepare and then Scrub
Select: "Make Lowercase," "Remove Digits," "Scrub Tags," "Remove Punctuation," and "Keep Hyphens"
Click Apply
4.) In the Lemmas box add the below, click Apply, and then Download.
5.) Locate the download text files and open one. We will discuss the results.
3.) Explore some of the lemma words (above) with different Voyant tools. Look at the different ways you can view their frequencies, relationships to other words, and their locations within the texts.
1.) What tools do you find most useful or promising, whether for analyzing these texts or texts you are interested in exploring?
2.) What might be some of the challenges and pitfalls of Voyant as you understand it so far?
3.) Are there any ways you can see text analysis (in Voyant or another tool) fitting in with your own research or teaching?
In this exercise, you will be using a small dataset of BC dissertations focused on segregation in Boston schools to conduct a word frequency analysis on the dissertation abstracts.
1.) Import Libraries
Import pandas (a data analysis and manipulation library), csv (a module for reading and writing tabular data in CSV format), and the Natural Language Toolkit (NLTK, a platform for building Python programs that work with human language data, used here for statistical natural language processing).
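A sketch of what that import cell typically looks like (the Counter import is used later for word frequencies):

import csv                       # reading and writing tabular data in CSV format
import pandas as pd              # data analysis and manipulation
import nltk                      # Natural Language Toolkit for working with text
from collections import Counter  # used later to count word frequencies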
2.) Import Data
3.) Preview Dataframe
a.) Cut and paste and run the code to see the dataframe.
b.) Change df.head() to df.head(8) and run the code again.
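A sketch of the load-and-preview step, assuming the dissertation spreadsheet has been saved as a CSV with an abstract column (the filename here is a placeholder):

# Load the dissertation dataset into a pandas dataframe
df = pd.read_csv("bc_dissertations_segregation.csv")  # placeholder filename

# Preview the first rows; df.head() shows 5 rows by default, df.head(8) shows 8
df.head()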
In this section, the text is cleaned. All of the cleaning is being applied specifically to the abstracts field since that is what we will be analyzing.
1.) Remove Empty Cells, Remove Punctuation, Convert to Lowercase
The following code
Gets rid of records (rows in the dataset spreadsheet) that do not have abstracts.
Removes punctuation so that it will not affect how tokens are created and how words are counted
Makes all letters lowercase; otherwise words with uppercase letters would be counted separately from the same words in lowercase. For example, "Education" and "education" would be counted as two different words.
After the code is run, a new dataframe will show the changes that have been made.
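A minimal sketch of that cleaning step, assuming the abstracts sit in a column named abstract; the cleaned column name abs_clean is illustrative and may differ from the workshop notebook:

import string

# Get rid of rows that do not have abstracts
df = df.dropna(subset=["abstract"])

# Remove punctuation and lowercase the text, keeping the result in a new (illustrative) column
def clean_text(text):
    text = text.translate(str.maketrans("", "", string.punctuation))
    return text.lower()

df["abs_clean"] = df["abstract"].apply(clean_text)
df.head()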
2.) Tokenization
Here the Natural Language Tool Kit (NLTK) library is being used to tokenize the text so that each individual word is a token. 'Punkt' is the Punkt Sentence Tokenizer, an NLTK algorithm that is being incorporated to tokenize the text.
Again a new dataframe will be created. Note that the new column abs_tokenize has been added. This is where the tokenized text is. The abstract column remains untouched.
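A sketch of the tokenization step with NLTK's word_tokenize (which relies on the Punkt models); abs_clean is the illustrative cleaned column from the previous sketch:

import nltk
from nltk.tokenize import word_tokenize

# Download the Punkt tokenizer models used by word_tokenize
nltk.download("punkt")

# Tokenize the cleaned abstracts into individual words; the original abstract column is left untouched
df["abs_tokenize"] = df["abs_clean"].apply(word_tokenize)
df.head()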
3.) Stopwords
a.) Download the NLTK stopwords list:
b.) Apply stopwords and add changes to abs_nostops column:
c.) Add customized stopwords not included in the NLTK:
d.) Apply stopwords and add changes to abs_nostops column:
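A sketch covering steps a-d, using the NLTK English stopword list plus the customized additions named later in the exercise:

import nltk
from nltk.corpus import stopwords

# a.) Download the NLTK stopwords list
nltk.download("stopwords")
stops = set(stopwords.words("english"))

# b.) Remove standard stopwords from the tokenized abstracts
df["abs_nostops"] = df["abs_tokenize"].apply(lambda tokens: [t for t in tokens if t not in stops])

# c.) Add customized stopwords not included in NLTK
own_stops = {"study", "school", "schools", "public"}
stops = stops | own_stops

# d.) Apply the expanded stopword list and update the abs_nostops column
df["abs_nostops"] = df["abs_tokenize"].apply(lambda tokens: [t for t in tokens if t not in stops])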
1.) Counting Words
a.) Count words in abs_nostops field:
b.) Use Counter, a container that keeps track of how many times equivalent values are added, to calculate word frequency:
c.) Display the 15 most common words:
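A sketch of the counting step with collections.Counter:

from collections import Counter

# a.) Flatten the abs_nostops lists into one long list of words
all_words = [word for tokens in df["abs_nostops"] for word in tokens]
print(len(all_words), "words after removing stopwords")

# b.) Count how many times each word appears
word_counts = Counter(all_words)

# c.) Display the 15 most common words
word_counts.most_common(15)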
2.) Add more stopwords
Return to the code written for step 3.c (own_stops = {'study', 'school', 'schools', 'public'}) and add additional stopwords that you think should be added.
3.) Visualization
a.) Import Matplotlib, a comprehensive library for creating static, animated, and interactive visualizations in Python.
b.) Identify the 30 most used words.
c.) Display the results in a bar chart with the words (x value) and a blue bar (y value).
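A sketch of the visualization step with Matplotlib, plotting the 30 most used words against their counts as blue bars:

# a.) Import Matplotlib
import matplotlib.pyplot as plt

# b.) Identify the 30 most used words
top_words = word_counts.most_common(30)
words = [w for w, c in top_words]
counts = [c for w, c in top_words]

# c.) Bar chart: words on the x axis, counts as blue bars
plt.figure(figsize=(12, 6))
plt.bar(words, counts, color="blue")
plt.xticks(rotation=90)
plt.ylabel("Frequency")
plt.title("30 most used words in the dissertation abstracts")
plt.tight_layout()
plt.show()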
1.) Download the or use the one you created in part one.
2.) Upload the dataset to
(Internet Archive - search"boston public school," facets selected: Texts, Always Available)
(Internet Archive - search "boston health," facets selected: Texts, Always Available)
for a variety of pre-1923 books.