Getting Started with Digital Exhibits provides an introduction to digital exhibit creation. Much of the information here originated with a workshop (see slides) and is related to Introduction to Digital Exhibits found in the Digital Scholarship Handbook.
This tutorial is not platform-specific. These Omeka instructions, however, will help you get started on the commonly used platform where you can apply what you learn here.
Digital exhibits are a form of online exhibit that, like their physical counterparts, use objects to tell stories, make arguments, and demonstrate ideas. Attributes include:
Objects can be digitized or “born digital” media of various types (e.g., digitized photographs, rare books, films, or born digital government documents).
Special attention is given to the organization of both the objects and the site in which the exhibit lives.
They might include a digital collection(s) component or interactive elements such as maps and timelines.
Before starting the exhibit creation process, you need to closely consider your topic, desired effects, and objectives, with the understanding that the decisions you make might change as you progress.
1.) Determine the Topic
What is the main focus or theme of the exhibit?
Some examples: a historical period or movement, an event, a person's biography, a process or technique (e.g., silk screen printing), an idea or concept (e.g., the law of gravity), an industry (e.g., whaling), a single object (e.g., a specific book, painting, or musical instrument)
2.) Determine the Desired Effect
Effects to consider (as cited in Barth, et al. 2018):
Aesthetic: designed to showcase the beauty of objects
Emotive: designed to elicit an emotion in the viewer
Evocative: designed to create a specific atmosphere
Didactic: designed to teach the viewer about a specific topic
Entertaining: designed for the amusement or enjoyment of the viewer
3.) Determine the Objectives
What do you want people "walking away" with? This means considering things like:
What is the motivation for creating the exhibit? (Why this exhibit?)
What are the intended learning outcomes?
How do you want visitors to be able to apply what they learn beyond the exhibit?
References
Barth, G. L., Drake Davis, L., & Mita, A. (2018). "Digital Exhibitions: Concepts and Practices." Mid-Atlantic Regional Archives Conference Technical Leaflet Series, no. 12.
In addition to the design and structuring of the exhibit site, usability and accessibility are also necessary aspects to consider when building your project.
Usability means how effectively users can interact with and navigate a site.
Accessibility means how effectively users with disabilities or technological disadvantages (e.g., slow internet access) can use a site and how equivalent their experience is with the experience of users who do not have such challenges. Often steps taken to better serve the disabled also serve those who are technologically disadvantaged, e.g., video transcripts can serve the hearing impaired and those who do not have the internet bandwidth to play a video.
Visual Design - A good rule to follow is to keep it simple. A lot of different colors, decorative elements, and busy layouts can make the exhibit more difficult to navigate and the information more difficult to process. This is especially true for those with disabilities.
Page Organization and Layout - Breaking up the text into logical sections makes it easier to read. (Avoid having a "wall" of text.)
Headers - Use headers and subheaders to help logically break up the content on a page and to introduce/frame the information to come. This will make the page more readable and easier for screen readers to navigate.
Contrast - There needs to be strong contrast between text and background. This means the colors need enough difference between them (in both intensity and hue) and that text should not sit on top of an image that makes it difficult to read. (WCAG guidelines recommend a contrast ratio of at least 4.5:1 for normal-size text; there are browser extensions for checking contrast.)
Fonts - Avoid using ornate and difficult to read fonts and avoid using all caps.
Alt Text - Images should have alt text, i.e., a brief written description that screen readers can read aloud for users who cannot see the image. (A minimal markup sketch follows this list.)
Captions - Videos should have captions.
Transcripts - Audio files (e.g., podcasts) should have transcripts; having transcripts for videos is also helpful and necessary if captions are not possible.
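To make these guidelines concrete, here is a minimal, hypothetical HTML sketch of an image with alt text and a video with a caption track (the file names and description text are placeholders; most exhibit platforms generate this markup for you when you fill in their alt text and caption fields):

<img src="boston-skyline.jpg" alt="The Boston skyline at dusk, seen from across the Charles River">
<video controls>
  <source src="exhibit-tour.mp4" type="video/mp4">
  <track kind="captions" src="exhibit-tour.vtt" srclang="en" label="English captions">
</video>

The alt attribute carries the image description that screen readers read aloud, and the track element attaches the caption file to the video.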
The following resources provide more helpful information on accessibility:
Web browser extensions list for checking accessibility, e.g., contrast and screen reader functionality
While every digital tool on this site can be used for "digital storytelling," the following are tools that have been specifically designed with storytelling in mind. Also, see the Multimedia Production Guide for information on video and podcast creation.
After considering the topic, desired effects, and objectives, follow the steps below. (The order is a suggestion and does not need to be followed exactly.)
1.) Select the objects that will be featured
The object selection process requires careful consideration and includes determining whether any digitization is needed and whether there are any intellectual property concerns. (See the Intellectual Property libguide for more on the topic.)
2.) Determine how you want to organize your objects
The order objects will go in, the way they are grouped, and how they are juxtaposed are major components of the curation process. It is helpful to play around with the order, even if only to ensure your chosen order is the best one.
3.) Select your exhibit platform
Choose the platform most appropriate for the content and the desired type of interactivity.
You may already know which platform you plan to use or the one you have to use based on what is available to you. If not, it can be helpful to base your platform selection on the content you have as opposed to shaping your content to fit into a given platform.
4.) Write your text
Write exhibit content for the introduction page, object labels, and group labels. (Also see "writing for the web," a topic that applies when writing for an online audience.)
Introduction page: Information that introduces or "sets the scene" of the exhibit
Object labels: Information that describes and contextualizes exhibit objects
Group labels: Information that explains how a group of objects are related, contextualizes the objects, or simply introduces them as a whole
This step requires consideration of the intended audience: Is the exhibit intended for a general audience? Or, is it intended for an audience with preexisting knowledge about the topic? The answer to these questions will help determine how much explanation is needed and will influence word choices.
Write accessibility text, i.e., alt text, video captions, transcripts (see Usability & Accessibility).
5.) Determine the site organization
Design the site's information architecture (structure, navigation, taxonomy, page layout, etc.) (See Site Organization).
The choices made here shape the flow of the exhibit and the interactive experience.
Organization choices have a major impact on exhibit usability and accessibility.
6.) Make Design Decisions
Make design decisions, e.g., choose fonts, colors, landing page image(s), etc. (Read Visual Design Basics).
Design decisions have a major impact on tone and usability and accessibility.
The following are some major concepts that should be considered when creating the site in which the exhibit lives.
Information Architecture (IA) - The structure of the site (e.g., the site hierarchy, the way pages link to each other) and the labeling of information so that it is understandable and navigable. (Read Information Architecture Basics and see below.)
Taxonomy - A component of IA that involves the language chosen to label menus/navigation bars, page headers, etc. (Read Website Taxonomy Guidelines and Tips.)
Navigation - The means by which one moves within a site and finds specific information, e.g., the site navigation menu. (Read Supporting Navigation and Wayfinding.)
Wayfinding - Related to navigation, wayfinding is how one knows where they are within a site and if they are able to find their way back to information to which they previously navigated.
Page Organization and Layout - The way content (text, images, video, etc.) is organized on a page. (Read What Do Common Web Layouts Contain? and see below.)
When planning your site, it is helpful to sketch out your information architecture in diagram form and draw wireframes to plan page organization and layout.
The following provides an example of an information architecture diagram for an exhibit on the history of cats. Looking at the diagram, you can get a sense of how the curator decided to organize the exhibit's content logically (which could have been done in a multitude of ways) and structure the site in which the exhibit lives accordingly.
The exhibit site has a hierarchy of only two levels: the Home page is the top level and the rest of the pages are the second level. The terms History, Breeds, Culture, and Readings are part of the site's taxonomy and will be the labels in the navigation menu that users click to find their way to the various pages.
Accessibility image description: The diagram shows the structure of the site, which is made up of ten different web pages. At the top (or the first level of the hierarchy) is the Home page, represented by a rectangle with the word home written on it. Below that is a row of terms that run left to right. They are History, Breeds, Culture, and Readings. Lines make them look as if they are branching off of the Home page. (These terms will be the labels in the navigation menu.) Branching off of these terms and situated below them are rectangles that represent pages. These pages make up the second level in the site's hierarchy. Branching off of the term History are the pages Wild Ancestors and Domestication. Branching off of the term Breeds are the pages Short Hair, Medium Hair, and Long Hair. Branching off of the term Culture are the pages Cats in Art, Cats in Literature, and Cats in Animation. Branching off of the term Readings is the page Further Readings.
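As an illustration of how a hierarchy like this can translate into a site's navigation, below is a minimal, hypothetical HTML sketch of the exhibit's menu (the page addresses are placeholders, and most exhibit platforms build this markup for you from the pages you create):

<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li>History
      <ul>
        <li><a href="/wild-ancestors">Wild Ancestors</a></li>
        <li><a href="/domestication">Domestication</a></li>
      </ul>
    </li>
    <li>Breeds
      <ul>
        <li><a href="/short-hair">Short Hair</a></li>
        <li><a href="/medium-hair">Medium Hair</a></li>
        <li><a href="/long-hair">Long Hair</a></li>
      </ul>
    </li>
    <!-- Culture and Readings would follow the same pattern -->
  </ul>
</nav>

Each top-level list item corresponds to a term in the navigation menu, and each nested item corresponds to a second-level page in the hierarchy.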
Wireframes are simple drawings of web pages that help with planning the organization of content and layouts. Below is an example of a wireframe for a page in the History of Cats exhibit. Here you can see that when one clicks on Culture, options for the pages for cats in art, literature, and animation appear.
Cats in Art is the page's main header (also known as header 1 or h1) and lets users know the page's title and the overarching topic. The other headers, Ancient Representations and Classical Cats, are second-level subheaders (also known as header 2 or h2). Each subheader introduces or frames the content to follow and is assumed to apply until a new header is used. If Ancient Representations, for example, had subsections like Ancient China and Ancient Greece, those would be header 3 or h3. Notice the pattern forming? Like the structuring of sites, pages are also structured using a hierarchy.
Accessibility image description: The image shows a barebones black and white drawing of a web page. At the top is the exhibit's title, The History of Cats. Below that is the navigation menu, which has the options Home, History, Breeds, Culture, and Readings. As if someone is clicking on the term Culture, a dropdown menu appears just below the word and has the list of links Cats in Art, Cats in Literature, and Cats in Animation. The featured page is Cats in Art. Those words appear as the main header on the top left of the page just below the navigation menu and are in the largest font on the page. Below this header are large rectangles that function as placeholders for what will be three images in the future. On the right side of the page just below the navigation menu is the subheader Ancient Representations, which is in the second largest font on the page. Below this subheader, there are lines that abstractly represent a block of textual content that will be added later. Below that content is another subheader, Classical Cats, which is the same font size as the previous subheader, and below it are also lines that represent a block of textual content.
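For reference, the heading hierarchy described above would look something like the following in HTML (most platforms create these tags when you pick "Heading 1," "Heading 2," and so on from a formatting menu, so you rarely type them by hand):

<h1>Cats in Art</h1>
<h2>Ancient Representations</h2>
<h3>Ancient China</h3>
<h3>Ancient Greece</h3>
<h2>Classical Cats</h2>

Screen readers use these levels to let users jump from section to section, which is why headings should follow a logical order rather than being chosen for their font size.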
The platform you select will influence decisions about exhibit organization and design, among other things. You may start out knowing what platform you want to use, or you may change your mind about the chosen platform based on the way you want to organize your exhibit, the way you want visitors to interact with it, and whether you want to have the object available in digital collection form as well.
Some things to keep in mind while considering platforms include:
Ease of use and customization
Whether there needs to be a searchable collections component
Whether exhibits should be navigable linearly, non-linearly, or both
The types and level of interactivity desired
Accessibility for the disabled
Some common platform options:
Omeka (Omeka Classic and Omeka S)
WordPress (wordpress.com and wordpress.org)
For more information, see the guide by Dr. Pamella Lach.
A series of workshop materials and tutorials covering 3D modeling, AR, and VR
Below are several tutorials focused on different aspects of 3D modeling, AR, and VR project creation. We will be expanding on these in the coming months, so if there is a particular topic you are interested in feel free to contact the BC Digital Scholarship Team at digitalscholarship@bc.edu!
TimelineJS is an easy-to-use timeline visualization tool that uses Google Spreadsheets. (See examples.)
Timelines created in TimelineJS are data visualizations. The data is entered in the spreadsheet, and the TimelineJS platform translates that data into a visual representation.
The tool's site provides instructions. Simply click "Make a Timeline."
From there you will be walked through four major steps:
Making a copy of a Google spreadsheet. (You can use your BC or personal Google account.)
Publishing the spreadsheet, which is necessary for it to be visualized
Copying and pasting the spreadsheet link to visualize the timeline
Getting the timeline link and embed code for sharing the timeline or embedding it into a site
The spreadsheet contains a template that will guide you through the information entry process. Do not make edits to the spreadsheet's headers as they are formatted specifically for the TimelineJS platform.
The headers represent the kind of information that should go in each column. The spaces where you put the information are fields. Two of these fields, Year and Headline, have to have an entry in order for the timeline to work. The rest are optional.
Field explanations:
Year, Month, Day, Time (Year is required) - Sets the date/time to varying degrees of granularity
End Year, End Month, End Day, End Time (optional) - Sets end date/time to varying degrees of granularity
Display Date (optional) - How you want the date to appear on the timeline
Text fields:
Headline (required) - The title or name of the event
Text (optional) - The descriptive or narrative information for the event
Media fields:
Media (optional) - Media URL (web address)
Media Credit (optional) - Text that gives credit to media creator and/or distributor; here you can create a link that takes users back to where the media originated (see "Creating Links in Fields" below)
Media Caption (optional) - Text that describes the media
Media Thumbnail (optional) - Link to a thumbnail image
Additional fields:
Type (optional) - Indicates which slide is the title slide.
Group (optional) - A way to show slides are related
Background (optional) - Change the background to a color (use a hexadecimal color code, e.g., #336699) or an image (use an image URL)
The spreadsheet is where the majority of the labor occurs, both intellectual and technical, as it takes time to fill it in and to do so accurately. The other labor-intensive aspects are selecting the events that will be addressed, the research component, writing the text, and finding or creating the media.
It's a good idea to visualize the timeline while you are working on it so you can keep an eye on how it looks and make sure there aren't any errors. To visualize the timeline, follow step 3, "Generate your timeline," and step 4, "Share your timeline," as directed on the TimelineJS site. Keep a copy of the timeline URL generated during this process, as this is how you will access and share the timeline. It is also how you get the embed code for embedding the timeline in things like websites.
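For reference, the embed code TimelineJS generates is an iframe that typically looks something like the sketch below; the source value here is a placeholder for your published spreadsheet's ID, so always copy the exact code from the TimelineJS site rather than writing it by hand:

<iframe src="https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=YOUR_SPREADSHEET_ID&font=Default&lang=en&initial_zoom=2&height=650" width="100%" height="650" frameborder="0" allowfullscreen></iframe>

Pasting that snippet into a web page (or into an embed block in a platform such as ArcGIS StoryMaps) displays the timeline inside that page.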
When the timeline does not work after you have attempted to visualize it, the cause is most likely an error in the spreadsheet. Common errors include a blank row sitting between filled-in rows, or a missing year or headline in one of the entries. After fixing these errors by deleting the blank row or adding the missing year(s) or headline(s), try visualizing it again.
Media must be hosted online as it cannot be uploaded to TimelineJS. Adding media requires that you use the specific media URL that links directly to the image or video.
To get the URL for a video on YouTube or a similar platform, select "Share," which is right below the video, copy the URL, and paste it into the Media field.
How to get an image URL:
On a Mac, hold down the Control key and click on the image, then select "Copy Image Address" or "Copy Image URL," depending on which browser you are using.
On a PC, right-click on the image, then select "Copy Image Address" or "Copy Image URL," depending on which browser you are using.
It will be a direct link that ends in an image file extension (such as .jpg or .png), rather than the address of the web page the image sits on.
It's easy to make mistakes with images and get the page URL or a thumbnail version instead. If you get the page URL an error will appear where the image should be in the timeline. If the image appears too small, you have clicked on a smaller version of the image and copied the URL. In this case, go back and make sure you click on the larger version to get the address.
You can create hyperlinked text in the Headline, Text, and Media Caption fields using the basic HTML tags:
<a href="Source URL">Hyperlinked text</a>
For example, if you want to add a media caption like "Mary Wollstonecraft by John Heath, c. 1797 (Wikicommons)" and you want Wikicommons to be a hyperlink to where the image can be found on the site, you would write:
Mary Wollstonecraft by John Heath, circa 1797 <a href="https://commons.wikimedia.org/wiki/File:Mary_Wollstonecraft_by_John_Opie_(c._1797).jpg">(Wikicommons)</a>
In the timeline caption location, the above will appear as:
Mary Wollstonecraft by John Heath, c. 1797 (Wikicommons)
StoryMaps provides a number of content block styles to choose from. Please note that public ArcGIS accounts do not have the embed, audio, and image gallery available.
View this StoryMap for a demonstration of the different kinds of content blocks.
To add a new content block, click in the area just above or below an existing content block. (The plus sign is automatically there when first launching a new story.) It will look like the following:
You will see the various content block and element options:
Below are descriptions of the different content blocks with links to instructions created by ArcGIS. They are organized as "Basic," "Media," and "Immersive," which is how they are organized in Story Maps.
Text - a text block for general textual content, headers, bulleted lists, quotes, etc.
Button - adds a button that can be linked to outside sources.
Separator - adds a line to break up content on the page, used to make content more readable and navigable
Map - create "express maps" in Story Maps or bring in ArcGIS map
Image - upload or link to images
Image gallery* - create an image gallery in the form of a grid of images
Video - upload or link to videos
Audio* - upload or embed sound clips
Embed* - embed different types of media (e.g., a TimelineJS timeline or a 360 degree photograph)
Swipe - create a swiping effect between two images or online maps
*Not available with the free version.
ArcGIS instructions:
See ArcGIS's Getting started with ArcGIS StoryMaps sections: "Building a Narrative," "Add an Embed," and "Make a Map"
Also, see: Add maps, Add media, and Add swipe blocks
Slideshow - create a more traditional slideshow
Sidecar - create a moving side panel for text and media independent of the main panel, also allows for text blocks to overlay and move across media
Map tour - create an interactive map that combines images and text with a map
Note that the Slideshow option has been incorporated into the Sidecar layouts.
ArcGIS instructions:
See ArcGIS's Getting started with ArcGIS StoryMaps sections: "Add Immersives"
Also see, Add sidecars, Add slideshows, Add map tours
David Rumsey Historical Map Collection has a wide variety of historical maps
The David Rumsey Historical Map Collection has a broad collection of historical maps that you might use as the base map for your ArcGIS StoryMap. These steps will walk you through the process.
1) Go to the David Rumsey Map Collection site and log in with your free account.
2) Locate the map in the database that you'd like to import to ArcGIS; the maps can be filtered by location and date. Note that for the map to be imported directly into ArcGIS Online, it must already be georeferenced within David Rumsey (indicated by the orange "View in Georeferencer" button).
3) Click the orange "View in Georeferencer" button and the map will open up as an overlay. From here, click the orange "What Next" button in the bottom-right corner of the screen followed by "Go to this Map page" button.
4) A page containing map metadata and other information about the map will appear. In the section on the left called "Use in GIS apps," click on "Get links" and then copy the URL under "XYZ link".
5) Return to ArcGIS Online and sign in with your free account. Once signed in, click the "Map" button in the top menu bar to open a blank map.
6) To add your map from David Rumsey, click the "Add" button followed by "Add Layer from Web".
7) This will open the Add Layer from Web dialog box. When it asks what type of data you are adding, choose "A Tile Layer" from the drop-down. This will expand the box and create a place for you to paste the XYZ URL from David Rumsey. Do so, and add the proper title and credits for your map.
8) Click "Add Layer" and the layer should be added to your map (you may have to move to the location to see it depending on the size of your map. Save by pressing the Save button in the top menu bar. Finally, make your map public by clicking the "Share" button and choosing "Everyone (public)".
9) Now it's time to add it to an empty StoryMap! Go to ArcGIS StoryMaps (it may or may not ask you to log in with the same information as ArcGIS Online). Create a new blank StoryMap by selecting "New Story" --> "Guided Map Tour".
10) In your new blank tour, click the "Map Options" button in the top right corner of your empty map, then "Select Basemap" in the left-hand pane that opens up, followed by "Browse More Maps". This will open up a window showing all your self-created maps, including the one you just created.
Now you can start adding points to your map tour on your historical map!
ArcGIS StoryMaps allows you to use a variety of maps, text, and multimedia elements to present interactive narratives. Since ArcGIS has a number of helpful StoryMap tutorials and instructions available, the information here is relatively brief and often links out to those resources.
Creating a good storymap requires good planning. You may find it helpful to sketch out your ideas on paper (in the form of a diagram, an outline, or a rough illustration) prior to working in the actual technology.
StoryMaps uses "content blocks," which include objects ranging from text blocks to interactive maps to images. See content for examples of content blocks.
When adding media of any kind, you are given the opportunity to add alternative text for the visually impaired. You want to do this unless the image is purely decorative and does not add to the content/narrative.
To add alt text, hover over the media and then click on "Options" (the gear icon). Depending on the content block type, you will see either a field that says, "Alternative text" or you will see a choice to click on "Display" or "Properties." Click on "Properties" and you will then see an "Alternative text" field.
StoryMaps lets you adjust the look/design within a limited range.
The URL (web address) that you get when you "View published story" is your project's front-facing URL and is the link you share out.
You can keep your story private or publish it for your audience. You have to republish your work for any changes made to be visible to visitors. Before publishing or sharing your story, you can preview it. The publishing and sharing options (among others) can be found in the menu at the top of the page:
With the use of a couple of plugins, both Omeka S and Omeka Classic allow you to incorporate 3D models into your digital exhibits. Getting both your site and the 3D models in the proper form, however, takes a bit of work, particularly for Omeka S.
While both Omeka S and Omeka Classic display 3D models using the Universal Viewer plugin, there are a few others that are necessary to ensure the framework is properly set up.
Plugins/Modules to Install in both Omeka S and Omeka Classic
Universal Viewer - This is the actual viewer that can display both 3D models and normal images/pdfs/etc. Its design is much nicer than the standard Omeka presentation view.
Archive Repertory - Keeps original names of imported files and puts them in a hierarchical structure (collection/item/files) in order to get readable URLs for files and to avoid overloading the file server.
Additional Modules for Omeka S
IIIF Server: Integrates the IIIF specifications to allow you to share instantly images, audio, and video.
Image Server: A simple and IIIF-compliant image server to process and share instantly images of any size in the desired formats.
As of the writing of these instructions, installing the Image Server module required the ability to directly access and install the module on the server, rather than copying the files over using FTP/SSH and installing from the Omeka Modules page.
To display 3D models, we use the Universal Viewer plugin, which uses the ThreeJS library to display the models. This means that the model needs to be in a .json format to upload the file to Omeka. There are a variety of ways to do this, but an easy method is to use the ThreeJS editor.
Open the ThreeJS Editor Link.
Upload your 3D model (OBJ, gltf, etc.) by going to File --> Import and navigating to your 3D file.
The mesh (geometry) of your model will appear. The next step is to attach the texture (color overlay) to the model, if it is not already integrated into your file. To do this, click the "+" next to the name of your file on the top of the right sidebar and select the material associated with it (generally "material_0"). Then choose the "Material" tab below.
From here, where it says "Type," select "MESHBASICMATERIAL," which should be at the top of the list of choices. Next, click the dark rectangle next to "Map" a bit further down the sidebar, and navigate to your texture (usually a .jpg, if you are uploading an obj). Finally, check the small box next to "Map" and the texture should appear.
Your model should look good now! The last step in the editor is to go to File--> Export Scene, and the model should be exported as a .json to your downloads folder.
Your model is now in the correct format (.json) to be uploaded to Omeka. In each case, you will want to be sure that the json extension and the application/json media type are allowed (in Omeka Classic, go to Settings --> Security from the top of the Admin dashboard; in Omeka S, go to Settings in the Admin sidebar and scroll down to the Security section).
Now just create a new Item as you would any other item in Omeka, with the .json file as your main file upload! Note that if you want a proper thumbnail in Omeka Classic, you should upload two files for your item, first a normal jpg of the thumbnail you want to use, and then the .json file of your 3D model. In Omeka S, there is an option when creating your item for adding a thumbnail image if you wish.
Blender is a free and open source 3D creation suite. It supports the entirety of the 3D pipeline: modeling, rigging, animation, simulation, rendering, and video editing.
In this tutorial, we will be using 3 programs, all of which are free to download:
Blender (available on PC, Mac, or Linux)
(free on App Store or with Apple ID for MacBook)
(free on App Store with Apple ID for MacBook)
When starting a new Blender project, this is the first screen you see. While it may look daunting, we will only be using a few tools in this tutorial to create new objects and perform minor edits on them.
The vertical toolbar on the left side of the screen is where we'll find most of the options we need for object manipulation.
This toolbar is very similar to what you might see in Photoshop. When an object is selected, these buttons let you edit its basic properties in the viewer window. Most important for our purposes are the Move, Rotate, Scale, and Transform buttons (3rd-6th buttons counting down, in the group of four tools). These let you edit your object in the standard ways you would expect.
A slightly more advanced tool that may be useful is the Extrude tool, which allows you to take a single face of your object and pull it outward. To access this tool, you must first change the mode you are in from Object Mode to Edit Mode from the drop down just above the manipulation tools.
Now, extruding our default cube once we are in edit mode takes just a few steps. First, just to the right of Edit Mode on the toolbar are 3 options that let us choose whether we want to select points, edges, or faces. The third option allows us to choose a face to extrude.
Now pick which face of our default cube we want to pull outward:
Finally, one of the choices on the expanded Object Manipulation toolbar on the left is called Extrude Region. Choosing this tool will let you pull outward, expanding your object in one direction.
The toolbar on the right displays both the collection of objects within our current scene and the properties of that object.
The top of the pane lists all the objects in the current scene and allows you to select them or make them visible/invisible (again, very similar to Photoshop). The lower window displays the object properties, which can be edited directly here. This includes the manipulation properties which you can also edit in the viewing frame itself. There are many other properties, but we don't need to worry about them for now.
Blender comes with a number of pre-created basic objects that you can then manipulate to create what you want. To add a new object, just go to Add on the toolbar next to Object Mode.
Once you have chosen your mesh, you can manipulate it or extrude it as needed to create the object you want.
Apple's Reality Composer program allows you to import .usdz 3D files. Unfortunately, Blender doesn't let you automatically export files in this format. I recommend exporting your object as a .glb file, as this will let you keep the texture; then you have two options for a quick conversion:
Use Apple's Reality Converter to quickly drag-and-drop your model to convert it.
Upload your model to Sketchfab and then download it as a .usdz (see the Sketchfab instructions below).
Note that you cannot change the color/texture of imported objects once they are in Reality Composer. As such, you should color/texture your objects in Blender and export as a .glb to keep the material attached. This can be done in the "material" options under the properties of a specific object.
StoryMap JS allows presenters to tell a story or present an idea through interactive maps and images. Tutorial created by Emma Grimm, BC Class of 2022.
StoryMap requires that you make an account before starting your own project. This account can be connected to your BC Gmail address.
Click on Make a StoryMap
Before you can make a StoryMap, you will be directed to sign into your Google account. By doing this, all of your information will be saved and you can easily edit any of your projects.
Once you have created your account, you will be able to make your first StoryMap or choose from one of your existing projects to edit. If you would like to start a new project, click the button in the bottom right-hand corner.
As you can see, I have two projects already started. Since I am starting a new project, I will press "new" in the corner.
Then, you will have to title your project before you can continue. In my example, I will title my project "Boston," as I will be creating an interactive map that displays different locations in Boston, Massachusetts.
Once you have named your new project, you will be ready to customize your map. Press "create" to start. Below, you can find more information about how to navigate StoryMap and customize your project.
Once you create a new story map, the standard display seen below will appear. It is from here that you will begin to create your story.
By clicking on Options in the top left corner, you can choose the map you would like to use for your presentation. As shown above, you can customize your map by selecting the following:
StoryMap size
Language
Fonts
Call to Action
The Call to Action is the phrase that will be displayed on your cover slide as a guide for your reader to start viewing. The default Call to Action is "Start Exploring".
Map Type
StoryMap offers some maps for you to use to get started with your project, including a Watercolor Map. By clicking on Map Type, you can choose one of the maps they have included, or you can upload your own map. Check out our tutorial on importing maps into Knight Lab from the David Rumsey collection, or our tutorial on georectifying and importing your own image.
In my example, I will choose the "Stamen Maps: Watercolor" as the map for my project. The font that I chose for my project was "Clicker & Garamond". I will also define my call to action as "Start Your Tour". All of the other settings I left as the default.
For your cover page, you can choose to either upload an image from your computer or insert a URL address from online. Then, you can insert credit for your image and a caption.
The Cover Page image serves as the cover to introduce your viewer to the topic. This image is not necessarily the same image as your map, but you can choose to use the same image.
In my example, I chose to upload a picture of the skyline of Boston that I had saved to my computer in "Downloads". Then, I gave credit to my source by providing the URL in the designated box. My changes are shown below.
The headline for the first slide will serve as the title for your project. For each subsequent slide, the headline will act as the subtitle for each section of your presentation. You can provide a brief description in the box below the title box to describe your slide and the point on the map that you are describing.
In my example, I chose my headline to be "Tour of Boston". I also provided a brief caption to describe my presentation, which says "Through this presentation, I will show you some of my favorite places in Boston, Massachusetts!". You can view my changes below.
You can customize the color and image that will be displayed in the background of each slide.
In my example, I chose an image of the skyline of Boston as my background image. I chose not to change my background color, though it is an option.
Now that you have finished your Cover Slide, press "Preview" at the top of the screen to view your work and make edits as you see fit.
Notice in my presentation, some of the map that we selected under "Options" is displayed on the left side. Then, on the right side, my background image of the Boston skyline is shown in the back. The street view of Boston is then shown on top as the media image. The title of my presentation is also presented with my caption, as well as my "Call to Action".
On the left hand side of the screen, you can add new slides to your presentation. Just like a PowerPoint or Google Slides presentation, each slide shows the new place on the map that you would like to focus on.
Knight Lab's StoryMap functions similarly to a Google Slides or Powerpoint presentation with its use of slides. This way, you can present your ideas and stories in an organized manner.
Once you establish which map you will be using, on each slide you can decide where you want the map to focus by typing in a location or coordinates, or by placing the marker over the area you want to focus on.
In my example below, I searched for "Boston" on the map and it gave me locations throughout the world. You can also choose more specific locations or terms, such as "Boston College".
For each slide, you can customize the background and media, just like with your Cover Slide. Additionally, you can choose to customize the Marker.
On each slide, you can choose an image URL or upload a file from your computer to act as the marker for the areas you have designated on the map. The default marker is a red tag.
On the cover slide, it will show which areas of your map have been selected by displaying your marker, such as the one in the image below.
In my project, I chose to customize my marker to the location I was describing in the slide. As you can see below, I chose to talk about Faneuil Hall Marketplace as my first destination in Boston and I chose the marker that displayed the market.
Once you have selected multiple places on the map, the locations will appear connected by a red dotted line. This way, you can display how your story will travel through the different locations. See how my locations connect down below.
You can switch between the editing view or preview by clicking on the tabs at the top of the screen. By doing this, you can see the progress that you have made on your StoryMap, and view your project from the reader’s perspective.
The preview of my presentation displays each slide. The viewer of your presentation can click the arrow on each slide to move through your presentation.
When you have finished your project, make sure to save and share! When you click on share in the upper right hand corner, it will provide a link which you can copy and send through email, text, or other platforms of social media. You can also provide a description and featured image that will be viewed first when clicking on the link.
Questions? Contact the Digital Scholarship Team in O'Neill library (digitalscholarship@bc.edu)!
Meshlab is a powerful open source tool for processing and editing 3D meshes. It provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing, and converting these 3D models. This very short introduction covers downloading the tool, importing an .obj 3D file downloaded from SketchFab, and taking basic measurements.
As an open-source tool, Meshlab is easy to download and install in just a few simple steps!
Head to the MeshLab download page.
Choose your operating system (Win/Mac/Linux) and click to download.
Navigate to the download folder on your computer and double-click to start the installation.
Sketchfab is a hosting platform for 3D models. Many of these models are free to download and are listed under a Creative Commons license. Here are a few simple steps to download a model from the site:
Go to Sketchfab. You will need to make an account with an email address and password to download models.
While many models are free to download, others require you to pay. Boston College Digital Scholarship has a collection of 3D models from around BC that are free to download once you have an account.
For this tutorial, we will be using a skull model from a biology class at BC, located within the Boston College Digital Scholarship Collection. To download the 3D files, simply open the page containing the model and click "Download 3D model" beneath the name of the model. All downloadable models will have this button.
A few choices of download types will appear: (a) the original format the file was uploaded in (in this case an "obj" file); (b) glTF, which is useful for web-viewing frameworks; and (c) USDZ, which is for augmented reality (AR) frameworks. For opening in Meshlab, the original format (.obj) is a good choice. Click "Download" and a zip file containing the .obj will download.
Once downloaded to your computer, unzip the file. Once unzipped, you will need to go into the unzipped folder (skull-a), open the folder called "source", and then unzip a second folder called "SkullA" which contains three files, including our .obj. Now you are ready to import the obj into Meshlab.
Once Meshlab is installed and you have downloaded and unzipped the model you want to open, start up the program. Your opening screen should look like this:
Once open, importing a model is easy. Go to File --> Import Mesh and navigate to the location where you unzipped your .obj file (in my case, the file location seen below). Select "SkullA_scaled.obj" and the file will open in Meshlab. It should look something like the screenshot below.
Meshlab is a powerful tool, but we are only going to be using the very basics for now. To start, you simply need to be able to rotate, zoom, and measure the object.
To look more closely at the skull, simply click and drag to rotate the model in various directions.
To zoom in and out, use two fingers on the trackpad or the scroll wheel on the mouse.
Finally, to measure the object, click on the little measuring tape in the main toolbar, which will look something like this:
Click on endpoints of the distance you want to measure. In the case below, I measured across the eyes of a different model, Skull B, with the result of approximately .095 meters, or 9.5 centimeters! Keep in mind that the units of measurement in Meshlab are in meters, so be sure to convert if need be.
Note you can open more than one 3D model at a time in Meshlab, though if they do not have associated spatial data, they might end up on top of one another. You can turn on and off meshes using the little "eye" icon next to the mesh name in the top right corner of the screen.
This is a guide to installing and running Tableau Desktop on your personal computer. Please note that all workstations in the Digital Studio (on the second floor of O'Neill Library) already have Tableau Desktop installed.
Tableau has versions for both Windows and Mac. Detailed system requirements are available on Tableau's website.
Tableau Desktop is visualization software used to create data visualizations and interactive dashboards. If you are a student, instructor, or researcher, you can request a free, renewable, one-year license for Tableau Desktop through Tableau's academic programs. For instructors and researchers, the individual license is valid for one year and can be renewed each year if you are teaching Tableau in the classroom or conducting non-commercial academic research. The student license also expires after one year; you can request a new license each year as long as you are a full-time student.
If you are a member of the public, please consider using Tableau Public instead, which is the free version of Tableau Desktop.
Here are the steps for students. (The installation process for instructors and researchers is similar; just follow the instructions on the screen.)
Step 1: Go to the Tableau for Students page.
Tableau Student
Step 2: Click on Get Tableau for Free.
Step 3: A web form will pop up. Complete all of the requested information, using your official BC email address when you fill out the form.
Step 4: Next, click on Verify Student Status.
Step 5: You will receive an email with a product key and link to download the software.
Step 6: Click on Download Tableau Desktop from your email and copy the product key.
Step 7: Follow the installation instructions to install Tableau to your computer.
Step 8: Activate your Tableau with your license key.
For instructors and researchers, click on Request Individual License on the screen.
The pop-up request form is similar to the student one described above, but additionally asks "I plan to use Tableau Desktop for..." In that form, you can select "Teaching only," "Noncommercial academic research only," or both. Select the option that fits your needs best. You do not need to be an instructor to get a Tableau copy.
Tableau Public
Following are the general steps to download Tableau Public:
Go to the Tableau Public download page.
Enter your email address and click "Download the App".
Once the installation file has been downloaded to your computer, run it and follow the prompts to install Tableau on your Mac or PC.
Join the Tableau Community Forums to find solutions for what you need to accomplish. Ask questions to receive help and feedback.
Get inspired by the many interactive visualizations in the Visual Gallery. Download the workbooks to play with on Tableau Desktop
Tableau training: How-to Videos
Lynda.com --Lynda.com has a great variety of training videos about Tableau.
Note: Lynda.com can be accessed with a Boston Public Library Card. Anyone residing in the state of Massachusetts can apply for a free e-library card. Once you have a Boston Public Library account, you can use your credentials to log in.
More reading about data visualization:
The Big Book of Dashboards
Visual Reporting and Analysis: Seeing is Knowing Whitepaper
Visual Analysis Best Practices: A Guidebook Whitepaper
Data Storytelling: Using visualization to share the human impact of numbers Whitepaper
Beautiful Evidence – Edward Tufte
Information Dashboard Design – Stephen Few
Information Visualization – Colin Ware
Starter Kit for Text Analysis
https://www.kenflerlage.com/2019/09/text-analysis.html
Search for “Digital Humanities” in Tableau Public
https://public.tableau.com/en-us/search/all/digital%20humanities
3 Easy Steps to Make Graphs
https://digitalhumanities.berkeley.edu/blog/17/06/26/3-easy-steps-make-graphstableau
Add Image of Google Maps and OpenStreetMap as Background Images in Tableau https://help.tableau.com/current/pro/desktop/en-gb/bkimages_maps.htm
U.S. Census Bureau Vizzes
In this tutorial, you will learn how to install Google Colab in your Google Drive and use Colab to perform a number of data tasks.
When creating 3D models from photographs, taking the photos is more than half the battle! Here are some tips and tricks to make your processing a breeze.
Having the right equipment will greatly speed up your processing time. Fortunately, in today's world nearly any camera is good enough to take your photos. A few things to keep in mind:
Use a camera with at least 5 MPix resolution
Avoid ultra-wide angle and fish-eye lenses if possible (it is possible to compensate for these effects, but better to not worry about it)
Fixed lenses are preferred, if possible, but using a zoom lens is fine as long as you stick with one focal length (that is, don't change the zoom levels during the photo-taking process)
If you are making models of objects, some bonus equipment may prove helpful, even if it is not required:
A lightbox may be used to spread even, diffuse light across the object being modeled, helping to avoid the effects of shadows (see Environment below)
A turntable will rotate an object so you can get pictures of all sides, in case you don't have the room to move around the object. The lightbox available in the digital studio has marks at around 10-degree intervals, allowing you to rotate evenly around the object and know where you began and ended.
A tripod keeps your camera steady at various angles, and will make the process of using a turntable smoother.
Each of these items is available in the O'Neill Digital Studio for you to reserve in the space.
The environment in which you take the photos is almost as important as the equipment you use. Here are a few things to think about, whether you are taking photos of a fixed feature outdoors or in a controlled, indoor environment:
Shadows are bad; even, diffuse light is good. If you are working outdoors, a cloudy day is best, though this is obviously not always possible. If you are working inside, using a lightbox (see above) can help to avoid unwanted shadows
A consistent background color can be helpful, particularly if you end up needing to mask your photos during processing. Again, a lightbox can help with this. Otherwise, it is good practice to position your object in front of a consistent and contrasting background.
Consistency is key, whether indoors or out. You don't want any objects or people moving around in the background of your images. This can confuse the processing software and make it more difficult to align your photos.
For successful 3D modeling of objects with photogrammetry, overlapping images are key, as the object itself is reconstructed through matching pixel groups across photos. Indeed, you ultimately want images of each face of an object from multiple angles in order to make your processing as simple as possible.
2 important notes!
(1) DO NOT ZOOM during the photo-taking process; stick with one focal length and, instead of zooming, move your body forward or backward as needed.
(2) In order to get the best model, try to fill up as much of the camera frame as possible with the object. You are making a model of the object after all, not the rest of the world.
The images below suggest a basic process for taking photos of a small object. It combines two factors for each photo, the angle at which the photo is taken and the rotation of the turntable on which the object is sitting. It is recommended to take photos at multiple angles of each side of the object, rotate the object ~10 degrees, and repeat the process until you have taken photos of the entire object.
If using a tripod, it may be easier to take all your photos at one angle and then rotate again at the second angle after adjusting the tripod.
Fewer photos may be required for simple objects, but we do recommend at least 30 photos for any photo-model to ensure that there is enough overlap for a full model to process
Check out the Digital Scholarship Sketchfab collection for the final results of the BC eagle model, created using ~100 photos (around 36 photos at 3 angles).
In order to put your own 3D models into Apple's AR Quick Look, they must be converted to a .usdz file from the more traditional .obj file. Two methods are outlined below. You will need:
The model you want to put into AR
The ability to download Reality Composer, free from the App Store for iPad or iPhone, or as part of the free Xcode for macOS on a Mac laptop/iMac (Apple account required)
Either a Sketchfab account or the ability to download Reality Converter (macOS only, Apple account required)
Do you want to play around with converting models but don't have any of your own? Feel free to use our example model below or download one from Sketchfab or from our Digital Scholarship Collection (see Sketchfab instructions below)
For simple projects, using a combination of Sketchfab and Reality Composer on an iPad is recommended.
Whether you have created an original 3D digital object or generated a model from a real-world subject, many times the software you are using does not allow for a direct export to .usdz, the file format required by Apple for use in its AR toolkit. While there are many plugins and tools available that can make the conversion from different platforms, here we discuss two simple workflows using Apple's Reality Converter and the online 3D repository Sketchfab.
Sketchfab makes it very easy to convert your model; in fact, it is automatically converted when you upload the model to the site!
Note that Sketchfab is free for anyone willing to make their models publicly available to download with a Creative Commons license. If you wish to charge for downloads or keep them private, you must create a paid account.
Go to sketchfab.com and click the Sign Up button in the top-right of the page. Choose your username and confirm with an email and password.
Once signed in, click the orange Upload button in the top-right. This will bring up a page that lets you drag and drop a wide variety of files for upload. If you are uploading an .obj that includes a texture, the easiest way is to zip the .obj, .mtl, and .jpg files associated with the model together, and drop the entire zip file into the uploader.
Click Upload Files and wait. Depending on how large your model is, it will take some time for the processing to complete. While you wait, you can add a variety of metadata to make your model more findable, including placing it in categories and tagging it with keywords. At this point, it is also necessary to click Free under the Download section on the right side of the page, unless you have an upgraded account on the site.
Once the processing is done, the orange Publish button in the bottom-right corner will become available. Click it, and your model is now published!
Once published, you will be able to access your model's Object Page, which allows you and others to manipulate your model, see options for embedding it in another site, and see other model information. This is also where you can download your model as a .usdz. Just below your username beneath the model is a Download 3D Model button; click the button, and a variety of download types are available, including Augmented Reality Format (USDZ). Hit download and you're done!
Note: it can take a few minutes even after publication for your model to be available to download. If you select Edit Properties in the top right, you can both edit your metadata and, if your model is not yet ready for download, a yellow box will inform you that the model is still being prepared
Reality Converter is Apple's 3D conversion tool for creating .usdz files. Its process is quite easy as well and allows your models to remain private (if that is important to your project).
Download Reality Converter to your Mac (Apple ID required for download) or from the App Store to your iPad/iPhone.
Once downloaded and installed, open the program on your device.
As with Sketchfab, Reality Converter accepts a variety of files and opens with a Drop files here screen. Drop your .obj file to be converted here.
Your 3D model will convert and should appear in the window, yet it will probably only be a view of your mesh with no texture. To add the texture, select Materials in the top-right, click Base Color and then navigate to the texture file (usually a .jpg) associated with your .obj.
Your model should be looking better now! To save your file as a .usdz, simply go to File --> Export and choose where to save your converted model
Reality Composer is Apple's no-coding-required platform for creating basic AR experiences. Here we will talk about how to open your model using this app and create a simple experience that allows your model to appear in AR and respond when tapped.
Download Reality Composer, free from the App Store for iPad or iPhone, or as part of the free Xcode for macOS (Apple account required).
After installing and opening Reality Composer, the first screen asks you to choose an anchor for your model. We won't go into all the details here (see the full guide for more), so simply choose a horizontal anchor so it will act as if your model is sitting on the table or floor.
A grid surface will appear with a little square box, along with a small placard sitting in front of it. Feel free to delete this sample object (or keep the placard if you want; if you click on it and scroll down in the properties pane on the right you can change its text to fit your model). To add your converted model, click the + sign in the middle of the top toolbar and click the Import button (Mac) or the large plus sign beneath the word import (iPad/iPhone). Navigate to your converted .usdz file and select it.
Your model should appear in the window! At this point, you may need to rotate, scale, or reposition your model. Clicking on the model will open up the Properties pane, which allows you to change the scale. It also creates x-, y-, and z-axes around your object, allowing you to move it around your window. Clicking on the arrowhead of each axis allows you to rotate the object around that axis. For now, simply make sure your object is rotated so that it makes sense for viewing.
Note you can also change the location and rotation of your object in the Properties pane. For more information on basic tools in Reality Composer, see the full tutorial.
Bonus: want your object to move about when viewed? Select the object and click the Behaviors button on the toolbar (looks like an exploding arrow pointing to the right). Click the + sign to add a new behavior and click Tap and Flip to automatically set up this behavior. See more about behaviors in the full tutorial!
In some cases, when using models of real-world objects, the spatial "center" of a model may not line up with the object's center, and this may persist through the .usdz conversion. This is noticeable when, for example, you try to spin your object around its central axis. If this is the case, using Sketchfab to convert your file should reset the central axis to the center of the 3D object.
Now you are ready to export and share your model in AR!
It's very easy to share your model from your iPhone or iPad. Simply click the ... button (triple dots) in the top-right corner of the toolbar and select Export. At this point, you can choose to share an entire project or just this scene. As we only have one scene right now, select Current Scene and then Export. Depending on what apps you have installed on your device, you can now share via Messages, Gmail, AirDrop, or a variety of other methods.
On a Mac, go to File --> Export and again you have the option to export the current scene or the whole project. Choose where to save, and it will save a .reality file to that location, which can then be shared in whatever way you wish. Reality files can be opened directly by iOS devices in AR or imported into Reality Composer by whomever you share them with for use in their own projects.
This catalog provides a list of different chart types with links to actual visualizations built in Tableau and published on Tableau Public. This was developed as a resource for the Tableau community for inspiration and to assist in the understanding of how these chart types might be used in actual use cases. All visualizations on this page are being provided with the permission of the original author and are available for download from Tableau Public. Click on the image to open the actual visualization in a separate browser window. (Note: inclusion does not mean the chart is the best choice for the data represented. Also note that the originator of each chart may not necessarily be represented; these are simply examples).
“The best statistical graphic ever drawn” is how statistician Edward Tufte described this chart in his authoritative work ‘The Visual Display of Quantitative Information’. The chart tells the story of a war: Napoleon’s Russian campaign of 1812. It was drawn half a century afterwards by Charles Joseph Minard, a French civil engineer who worked on dams, canals, and bridges. He was 80 years old and long retired when, in 1861, he called on the innovative techniques he had invented for the purpose of displaying flows of people, in order to tell the tragic tale in a single image. This visualization shows six types of data (number of soldiers remaining, the army's direction, geographic information, distance, dates, and temperature) in two dimensions.
Here is the interactive version of this visualization on Tableau public:
https://public.tableau.com/en-us/gallery/recreating-charles-minards-napoleons-march
Explore this visualization: link
Here is an example that I created in Tableau:
More about his project:
Source: https://www.flerlagetwins.com/2017/12/geometric-art-in-tableau_17.html
Reality Composer (RC) for iOS, iPadOS, and macOS makes it easy to build, test, tune, and simulate AR experiences for iPhone or iPad. With live linking, you can rapidly move between Mac and iPhone or Mac and iPad to create stunning AR experiences, then export them to AR Quick Look or integrate them into apps with Xcode. This tutorial will introduce the basic functionality of the app and review the interface.
Please note that screenshots for this introduction were created using Reality Composer on a MacBook. The screen layout is slightly different when working directly on an iPad, but the steps are the same.
Reality Composer is free to download from the App Store for iPad or iPhone, or comes as part of the free Xcode for macOS (Apple account required). Be aware that Xcode is quite a large program for developing applications, so it may not be feasible for older MacBooks or those without much storage space. Running Reality Composer on a MacBook also requires you to send your project to an appropriate iPad/iPhone for testing (see below) or open it in an emulator using Xcode.
Note that Reality Composer projects (i.e. what opens in Reality Composer) are saved as .rcprojects while sharable experiences that open directly onto an iPhone or iPad with the Quick Look tool are .reality files.
When you first open up a new Reality Composer project, your first decision to make is what kind of anchor you want for the project. The Anchor type will determine what requirements are needed to open your scene in AR.
There are five relatively straightforward anchor types to choose from in Reality Composer:
Horizontal: will look for any horizontal surface (e.g. a table or the floor) to open your experience on
Vertical: will look for any vertical surface (e.g. a wall) to open your experience
Image: will look for a specific defined image that you determine (e.g. a painting or business card) to open your experience around
Face: will look for a person's face (as recognized by Apple's camera) to open your experience around
Object: will look for a previously scanned physical object (see details at the end of this workshop) to open your experience around
Once you have chosen an anchor, the main window of Reality Composer will open, which changes slightly based on the anchor type you have chosen (pictured above is the horizontal anchor view). You can also change your anchor in the right-hand Properties pane, as well as change the overall physics of the scene (e.g. to allow things to fall through the floor; this can also be set object-by-object later).
The window opens with two objects already present: a cube and a little placard showing some text. Before checking those out, take a moment to familiarize yourself with the other options on the main toolbar. The most important ones for us will be the Add, Behaviors, and Properties buttons, but we can quickly review them all.
The Scenes option opens up a sidebar that allows you to name the "scene," or AR view, you are currently working in, as well as create new scenes that can be linked by certain actions (see below).
The Frame button just lets you navigate more easily around your scene by allowing you to zoom in on certain objects or on the scene as a whole.
The Snap tool will snap objects to other objects or to certain horizontal/vertical lines within the scene (similar to the Photoshop snap tool).
The Modify tool allows you to swap between adjusting an object's position/rotation and adjusting its length/width/size (this can also be done in the properties pane, as we will see).
The Space button swaps between "global" and "object-oriented" coordinates, allowing you to move an object along various axes within your scene.
The Add button adds more models to your scene, of lots of different types.
The Play button (also the AR button when working on an iPad/iPhone) allows you to test out your scene on a device.
The Send to button (Mac only) allows you to send your scene directly to a linked iPad for testing.
The Info button checks for updates to your models/scenes.
The Behavior button allows you to assign different behaviors to your object based on specific prompts (e.g. if the object is tapped, it flies in the air).
The Properties button allows you to edit the properties of a specific model or of the scene as a whole.
Exploring and playing around with objects is a good way to learn, starting with what comes in our default scene.
Clicking on the cube will automatically open the Properties pane, allowing you to directly edit its various properties like width, height, color, etc. You can also name the object, transform it in space (if you have exact specs you want), and decide if it will obey the laws of physics (e.g. fall to the surface if it appears up in the air). You can also edit directly in the viewing window once an object is selected: arrows pointing along axes allow you to move an object, while clicking the tip of an arrow will allow you to rotate an object around that axis. Give it a try!
Clicking on the placard will open a similar pane, though this one also allows you to edit the text appearing on the sign. Each object you add will have its own individual properties that you can edit.
If you want to add a new object, just click the + Add button. You have the option of many pre-installed 3D objects to work with, as well as signs that can hold text and "frames" which can hold attached images. You can also introduce your own objects by following our tutorial on the subject.
I can drop a plane into our scene by clicking and dragging it or by double clicking.
You can make your scenes dynamic with actions and reactions by adding behaviors to your objects. I'm going to select the plane and then click the Behaviors button on the toolbar; a new pane will open up along the bottom of the page.
Clicking the + button will open the new behavior pane, where there are several pre-set behaviors you can choose from; if you scroll down to the bottom, you can create a custom behavior. For now, I will choose Tap and Add Force so we can give our plane some movement.
The behavior pane has its first behavior! You can rename the behavior (currently called Behavior) in the left pane; the Trigger pane allows you to determine what action will make the behavior start (in this case, tapping), as well as which affected objects have the trigger attached (in this case, the plane, but you could add more).
Finally, the action sequence allows you to create a series of effects that happen, either simultaneously or one after another, when the behavior is triggered. In this case, we are going to have the plane start moving forward at a certain velocity.
Our plane will move forward along one axis when triggered, but that's not really taking off. Adding a second force in an upward trajectory after a moment will make this takeoff look a bit more realistic. To add further actions to the sequence, simply tap the + button next to the words Action Sequence in the Behaviors window. This will then pop up different pre-coded behaviors you can choose from.
In the image above, I added two new behaviors: a Wait behavior and a second Add Force behavior at a 45-degree upward angle. Importantly, I directly attached the Wait behavior to the first Add Force behavior (with a drag and drop), which means that these two actions will begin simultaneously, and the second Add Force will not start until the first set is complete. This means our plane will move forward a certain amount before briefly "taking off".
Now that we have an experience, we need to test it out to see if it functions correctly. There are a few ways to do this:
If you are working on an iPad or iPhone, it's easy. Just tap the AR button on the top toolbar to open the experience in AR, and then the Play button to start the experience from the beginning.
If you are working on a Mac, it's a bit more involved. If you hit the Play button on the top toolbar, the experience will start, but it will obviously not be in AR, making testing a little bit difficult (though you can still test the functionality in the building screen, as pictured below).
There are other options for testing from a Mac, however. If you have an iPhone or iPad handy that has Reality Composer installed, you could connect it via a USB/USB-C cable to your computer. If you then open Reality Composer on both devices and hit the Send To button on your Mac, the file will open in Reality Composer and be editable/testable on your iPad!
Note that if you are using specially imported models, they may not be available on your second device, unless they have been imported there as well.
Another option is to export the file and share it as a .reality file. To do this, go to File --> Export, and pick either to export the entire project or just the current scene. After saving it on your computer, you can navigate to that folder, select the .reality file, again go to File --> Share in the Finder menu, and choose how you want to share the file (text, AirDrop, etc) to your iPad or iPhone. Opening it on your iPhone or iPad in this way does not require Reality Composer, as it is using the built-in Apple Quick Look platform (you can also share your experience with other people in this way!).
We are going to add one last behavior that will make our action "replayable": returning the plane to its original location after a certain amount of time. Otherwise, once we tap the plane the first time, it's gone.
This can be done by adding one more action to our action sequence, a Move, Rotate, Scale To action that will move our object back home. Adding this action to the end of our action list, and then selecting the plane as our affected object, will allow you to choose where you want the plane to return to. In this case, I will adjust it so it ends up back where it started (by moving the plane in the image back to the left to the starting place). Also note that I added one more Wait action so that the plane will wait one second after it stops being impacted by the force to return home.
And that's it! Now the plane will return to its original location. Project testing videos and the files created with this tutorial can be found below. There are obviously a ton of other behaviors and models to play with, so give it a try!
As a bonus, you could use the Hide and Show behaviors to make the plane seem to magically "appear" back in its home location at a certain moment. See if you can make it work!
The Tableau COVID-19 Data Hub contains resources to help people visualize and analyze the most recent data on the coronavirus outbreak.
In this tutorial, we will learn:
How to connect data to Tableau
How to create worksheets
How to create an interactive dashboard
How to save and publish your visualization
In this tutorial, we will work with the COVID-19 data from the European Centre for Disease Prevention and Control website.
Data preview in Excel:
The data fields are described below:
Start up Tableau Desktop and you will get the start page showing various data sources. Under “Connect” on the left side of the screen, you have options to connect to a file or a server data source. Under To a File, choose Text file. Then navigate to the CSV file you downloaded in the last step:
At the bottom of the Tableau Desktop window, click on a sheet (Sheet 1) and you will see the following screen:
Tableau automatically separates the data into Dimensions and Measures. Dimensions are the categorical fields. Measures are the quantitative fields, such as death count or positive case count.
We will create a bar chart tracking the number of new positive cases per day. Drag “People Positive New Cases” from Measures and drop it into the “Rows” section. Drag "Report Date" from Dimensions to "Columns". Select “Days” from the shortcut menu.
Note that it defaults to YEAR(Date). To format how the date is displayed, right-click on YEAR(Date) and select Day, specifically the option that has the example "8th May, 2015".
We can also add some filters to the bar chart so that a user could filter to see a certain country or date range. From Dimensions, drag Country Short Name to the Filters shelf. Click on All and then OK.
Next, right-click on the Country Short Name on the Filters shelf and select Show Filter. Now you will see a list of countries on the right.
Double-click on the Sheet 1 tab and rename the worksheet title to "New Positive Cases".
Next, we will create a map. First, open a new worksheet and double-click "Country Short Name". Tableau places the longitude and latitude coordinates in the Columns and Rows shelves, respectively, and a point map will appear on the design canvas.
Drag and drop "People Positive Cases Count" to Size under Marks.
Double-click on the Sheet 2 tab and rename the worksheet title to "New Positive Cases by Country".
Now we will create the third visualization; open a new worksheet.
We’ll create a vertical bar chart tracking confirmed cases per country.
Drag “People Positive Cases Count” from Measures and drop it into the “Columns” section. Drag "Country Short Name" from Dimensions to "Rows".
Go to the bottom of the chart and click on the sorting icon. Sort the number of positive cases in descending order.
Go to Marks, where you can customize the chart color by using the color palette.
Name the worksheet “Positive Cases by Country.”
Let’s create a dashboard to pull all of these visualizations together. The dashboard will combine our three visuals we have made in the previous steps. Click on the new dashboard icon at the bottom of the page to create a new dashboard.
Our three worksheets are on the left. Drag “Worksheet 1” into the drop area on the right.
Combine 3 worksheets:
In this tutorial, we will look at a few advanced graphs that go beyond the show me feature in Tableau.
Tableau is a great tool for data analysis and visualization. It has some powerful tools to make the visualizations appealing and interactive. The Show Me feature can be used to apply a required view to the existing data in the worksheet. Those views can be a pie chart, a line graph, a scatter plot, or a simple map. Whenever a worksheet with data is created, it is available in the top right corner as shown in the following figure. Some of the view options will be greyed out depending on the nature of selection in the data pane.
One such awesome feature is animated data visualization.
At the top of the page you can see a toolbar where you can find different options for creating a report.
Pages, names
First of all, your report needs a title. You can simply name it at the top left corner and the changes will be automatically saved. Then you can add multiple pages to the report which you can also name and easily switch between them.
Types of data visualization
The next part of the toolbar is the data visualization tools. You can select the graph type you want to build.
For example, a bar chart requires you to select one dimension and at least one metric. If you add multiple metrics, you will see multiple bars for each category.
Inserting text and images
You can insert text, images, rectangles, and circles to the report depending on your purpose.
Layout and theme give you the possibility to play with the style of your report. You can change the background, colors, text styles, and display options and create a unique style that can represent the style of your company.
The styling and controlling menu is a great help in the process of creating a report. You can select the chart type, experiment with metrics and dimensions, change data sources, apply a filter, etc. Now let’s switch between view and edit mode to see how our sample report looks:
Sharing the visualization
You can do a few things with the share feature. You can collaborate on the visualization with your team or clients. Depending on their level of access, they can view or edit the reports through an invitation or a shared link.
The visualizations also can be shared by embedding them into online and offline content, from websites and blog posts to annual reports.
Step 1: Navigate to
Step 2: Click on the "Use it for free" button
Step 3: Sign in with your Google account and password. You should now see the home page of the Data Studio
Step 4: Click on the 'Create' button on the top left:
Step 5: Click on 'data source'
Step 6: Find and Click on Google sheet connector
Step 7: Find and Click on the Google sheet
Step 8: Once you select the worksheet, there are still some decisions to make. You have three options:
Use the first row as headers. Selected by default. Does what it says on the tin.
Include hidden and filtered cells. Selected by default. If you want to keep data out of Data Studio by hiding its columns in Google Sheets, deselect this.
An optional range of cells containing your data. Data Studio looks at the entire worksheet by default. If your table lies in a certain range, specify that here.
Step 9: Click “Connect” to give Data Studio access to the Sheet:
Step 10: You should now see a screen like the one below. Next we need to help Data Studio understand what kind of data you’ve given it.
Step 11: Google Data Studio doesn't always determine the correct data type for each field, so you need to make sure that each field is of the correct data type.
Step 12: Create calculated fields (optional)
Data Studio allows you to add custom fields to a data source. So instead of adding all sorts of columns and formulas to your sheet, you can add them to your data source. That’s great because you can add calculated fields on the fly, without modifying your source data. Your new fields will be accessible in any reports that use this data source.
Step 13: Once everything looks right, click on the "Create report" button; this will create an empty report with your Google Sheets data source connected to it:
Step 14: Charts in this report will use your new data source by default. Now you’ve got a data source and report to work with. The rest is up to you!
Knightlab StoryMap JS is an excellent tool for quickly and easily creating basic storymaps, which combine spatial data with text and various multimedia. One feature that makes it particularly useful is the ability to import high-resolution imagery as tiles from sites like the David Rumsey Map Collection or the NYPL Digital Collections. However, when importing these maps, Knightlab StoryMap automatically sets the maximum zoom level to what it thinks is "appropriate" based on the size of the map. This does not allow the user or the creator to take advantage of such high-resolution images.
The solution has three steps:
To create your own high-resolution map tiles
To host your tiles on Github
To import your tiles as a Gigapixel map into Knightlab
There are two ways to go about creating your own tiles. If you have access to Adobe Photoshop, the process is quick and easy.
Download your image as a .tiff/.geotiff file from a site like David Rumsey or the NYPL, or use your own image.
Open your image in Adobe Photoshop.
Write down the size of your image in pixels. This can be seen by going to Image-->Image Size in the main toolbar. The width and height should then be displayed; make sure your units are in pixels! You'll need this information for adding your tiles to your storymap later.
It's time to Zoomify your map! Just go to File --> Export --> Zoomify and the Zoomify export menu will pop up. Here you can set the export folder where your tiles will be sent, as well as the Image Quality. For high-res images, you might as well use the highest quality possible. Once you are ready, hit OK. A web browser may pop up, but you can close it. If you navigate to the location where you saved your files, you should now see a series of folders containing your tiles!
If you do not have access to Photoshop, don't despair! Zoomify has a free tool you can download that will perform the same tiling as Photoshop does.
Once downloaded, open the Zoomify Free Converter application on your computer
After downloading your high-resolution image, drag and drop the downloaded file onto the converter (or use File-->Open). The tiling will begin automatically and will save to the folder where your image is located. And you are done!
To tile your image using OSgeo4w, follow these steps:
Save your georeferenced map to a known location on your computer by exporting it from ArcGIS, QGIS, or MapWarper (see our tutorial for georeferencing maps in MapWarper).
Open up the Osgeo4w command line from the start menu
Navigate the command line to the folder where your raster image is stored using the cd ('change directory') command (e.g. cd c:\temp\saved\testImageFolder)
Type “gdal2tiles -geodetic imageToBeTiled.tif FolderName” where imageToBeTiled.tif is your exported georeferenced image and FolderName is the Folder you want to be created to hold the tiles
To control which zoom levels are created, add -z 2-15 after -geodetic. This example would create tiles for zoom levels 2 through 15; a combined command is shown below.
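Putting the last two steps together, the full command (using the placeholder file and folder names from above) would look like:

```
gdal2tiles -geodetic -z 2-15 imageToBeTiled.tif FolderName
```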
Your image is now tiled! A variety of test maps are automatically created in your folder holding the tiles which you can drag and drop into a browser. In some cases, tiling a .jpg may work better than tiling a .tiff, especially if you have self-georeferenced the image.
The free version of Maptiler restricts the size of your image in pixels (10k x 10k) and file size (30 MB), and adds a watermark to your tiles. It also does not let you control the zoom levels that are created, meaning that smaller maps (e.g. portions of a city) cannot be tiled appropriately. It also requires spatial data for the map, meaning that you must georeference your image either within the platform or in software like ArcGIS or QGIS.
Yet, it is quite user-friendly otherwise. Simply drop your image into the window after opening...
...and you will be asked about spatial data. If you previously georeferenced your image in ArcGIS, it will automatically detect the spatial data; otherwise, you will need to match up your map with a full-world map using control points. This is, however, quite easy for larger maps: just click the location on your image, then on the world map, and those locations will be linked together. It is recommended to choose at least 4 points scattered across your map.
Once you have added enough points and hit "Continue," you will be asked how you want to export your map. For our purposes, a "Folder with tiles" is fine (MBtiles is for Mapbox, OGC GeoPackage is for SQL databases, neither of which we are concerned with here).
Once you choose your folder, your image tiles will be exported!
Sign up to create an account on Github.
Start a new project either by pressing the “Start a project” button or, if you already have a Github account, clicking on the plus sign in the top right and selecting “New repository.” Give it an identifying name and make sure it is set to "Public"
Once your repository is open in Github Desktop, you can open the folder in your Finder/Explorer window by clicking the "Show in Explorer" button on the right side of the screen. This folder is often located in a folder named "Github" in the Documents section of Explorer in Windows. Once you’ve located the folder on your computer, move all the files and folders generated by Zoomify into your repository folder.
Once your files are copied, return to the Github Desktop application. You should now see a series of files that have been changed in your repository, all the files you just copied over! In the “Summary” field, type a descriptive message of your content. Then click “Commit to Main” to upload your files. This process may take a few minutes depending on the size of your files. Finally, when a blue button appears, click "Push Origin" to push your files to the online version of the repository.
Once your sync has completed, return to your online github repository and you should see your tile folders. The final step is to make these files publicly available with a URL. Navigate to the "Settings" tab and scroll down to the “Github Pages” section. Create a page for your project by selecting the source branch (main) and clicking “Save.” After the page refreshes, scroll back down to the Github Pages section and you will find the url you can use to create your Gigapixel (Note: clicking on this link directly will take you to an error page saying that Github can’t find what you’re looking for. This is normal.).
Now you have made your tiles and hosted them on Github; the last step is to load them into Knightlab. (Note: this tutorial assumes you are familiar with StoryMap JS; if you are not, see our tutorial on StoryMap JS here)
From the Options menu, where you select your background map image, change the Map Type to Gigapixel, insert the URL for your github tiles into the Zoomify URL box, and finally, under Max Image Size, insert the width and height of the original image in pixels. This can be found by opening the file in Photoshop (see above) or by looking at the Details tab under the Properties of the original image.
Make sure "Image" is chosen under the "Treat As" option, as we are using a map that does not have any real spatial data attached
And you are done! You should now be able to zoom in much closer into your high-resolution image on each slide of your storymap.
Create New report on Airbnb Boston reviews
To create a new report from scratch, a portion of the Kaggle dataset has been uploaded into a Google Sheet to be used as the data source for Google Data Studio.
the full Kaggle dataset of the Airbnb reviews in Boston is available at
the Google Sheets, with approximately 10k reviews, to be used as data source is available at
Spend some time to understand the data by reading their description on Kaggle and looking at the table on Google Sheets.
The data-source table has been created by joining the “Listings” and “Reviews” original tables provided by Kaggle, and exporting the first 10k joined rows sorted by ascending “listing_id”.
Create a new report
Go to the Data Studio home page
Click on “Start a new report” (Blank)
Rename the “Untitled Report” with a name of your choice by clicking on the name itself
Create a new data source by clicking on the blue button on the bottom right, or select the Airbnb data source if it is already present in the right-pane list
Connect to the Google Sheet data source by using its URL:
Choose the “Google Sheets” connector in the list of connectors on the left
Choose the “URL” option in the first column
Choose the “Reviews Query DW” worksheet in the next column
Tick the option to “use the first row as headers” if it is not ticked yet
Click on the “Connect” button to execute the connection to the data source
Dimensions, metrics, and transformations
Check the type and aggregation of each field and that all the fields are correctly interpreted as either dimension or metric.
CONCAT(latitude, CONCAT(', ', longitude)) → to generate a (lat, long) field useful for map charts; before generating this new field, set “Aggregation=None” for latitude and longitude fields, so that they become dimensions (by default, Data Studio considers them as metrics)
After creating new fields and updating the existing ones, click on “Add to report”
Analyze the data
Analyze the data by building the following visualizations. Then, explore and create new visualizations to find interesting insights on your own.
Analysis (1): Number of Records over time
Analysis (2):
Analyzing the number of different reviewers for each (lat, long) location. Note that the Kaggle dataset of the Airbnb reviews is in Boston, Massachusetts, US.
Allow end-users to filter the data under analysis by selecting a date range and city name.
All the green fields represent dimensions (usually categorical data) and all the blue fields represent metrics (quantitative fields that can be counted or aggregated). See the documentation for .
Go to and download their free tool for Windows or Mac
OSGeo4W is a distribution of a broad set of open source geospatial software for Windows environments (Windows 10 down to XP). OSGeo4W includes , , as well as (over 150).
Install Osgeo4w from the
Maptiler has a free version that you can download for both Windows and Mac. On the plus side, it works with Macs, unlike some other tools; on the downside, it has its own restrictions.
So you have your tiles; now you need to make them publicly accessible online so your storymap can access them! To do this, we want to use the free hosting site GitHub. We will be hosting the tiles created through Zoomify Free here.
If you’re new to GitHub, download GitHub Desktop, which lets you upload many files at once. Once installed, go to the “Code” button in your online repository and click “Open in Desktop.” Allow the site to open Desktop for you, and click "Clone". This will create a copy of your repository on your computer.
Go to and create a new StoryMap
Tutorial:
Tutorial:
Paste the Airbnb-data Google Sheet URL in the specific field:
Create new useful fields (dimensions or metrics) from the existing ones by exploiting formulas, such as in the following (click on the “+” and “fx” placeholders). For details on this step, see:
API: Application Programming Interface “An API is a set of definitions and protocols for building and integrating application software.”
REST API: “Representational State Transfer API”
An API is basically a set of functions and procedures that allow one application to access the features of another application; REST is an architectural style for networked applications on the web, limited to client-server applications. REST is a set of rules or guidelines for building a web API. There are many ways to build a web API, and REST is a standard approach that helps you build one faster and makes it easier for third parties to understand.
Use one candidate ID from a sheet tab to retrieve name and party, and office information through an API call, and output results to another sheet tab.
The API call uses “candidate” in the OpenFEC Developer page, and the first API builder, “/candidate/{candidate_id}/”
Input the value (candidate id) from A2 in sheet tab "person"
Make the API call and retrieve data of candidate's name, party and office.
Output the data to sheet tab "live".
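One way to do this from Google Sheets is with a short Google Apps Script. A minimal sketch is below; the sheet-tab names ("person" and "live") come from the steps above, while the function name, the API-key placeholder, and the JSON field names (name, party_full, office_full) are assumptions to verify against the OpenFEC documentation.

```javascript
var API_KEY = "YOUR_OPENFEC_API_KEY";  // placeholder: request a key from api.data.gov

// Sketch: read a candidate ID from the "person" tab, call the
// /candidate/{candidate_id}/ endpoint, and write the result to the "live" tab.
function getCandidate() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var id = ss.getSheetByName("person").getRange("A2").getValue();

  var url = "https://api.open.fec.gov/v1/candidate/" + id + "/?api_key=" + API_KEY;
  var data = JSON.parse(UrlFetchApp.fetch(url).getContentText());

  var candidate = data.results[0];  // the matching candidate record
  ss.getSheetByName("live").getRange("A2:C2").setValues([
    [candidate.name, candidate.party_full, candidate.office_full]
  ]);
}
```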
Use a list of candidate IDs from a range to retrieve name and party, and office information through an API call, and output results to another sheet tab.
The API call uses “candidate” in the OpenFEC Developer page, and the first API builder, “/candidate/{candidate_id}/”
Input the values (candidate IDs) from A1:A20 in sheet tab "id_list"
Iterate the ID list to make the API call, and retrieve data of candidate's name, party and office.
Output the data of 20 candidates to sheet tab "result".
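The batch version wraps the same call in a loop over the ID range; again a sketch, with the tab names ("id_list" and "result") taken from the steps above and the same assumptions as before.

```javascript
var API_KEY = "YOUR_OPENFEC_API_KEY";  // placeholder

// Sketch: loop over candidate IDs in id_list!A1:A20 and write one row per candidate.
function getCandidateList() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var ids = ss.getSheetByName("id_list").getRange("A1:A20").getValues();
  var output = [];

  for (var i = 0; i < ids.length; i++) {
    var url = "https://api.open.fec.gov/v1/candidate/" + ids[i][0] + "/?api_key=" + API_KEY;
    var candidate = JSON.parse(UrlFetchApp.fetch(url).getContentText()).results[0];
    output.push([candidate.name, candidate.party_full, candidate.office_full]);
  }

  ss.getSheetByName("result").getRange(1, 1, output.length, 3).setValues(output);
}
```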
Map Warper is an open source map warper/map georectifier, and image georeferencer tool developed, hosted, and maintained by Tim Waters.
In Map Warper, it is possible to browse and download maps others have uploaded into Map Warper without an account. To georectify your own map, however, you must make one. This also allows you to easily return to your maps later.
All you need to create an account is an active email address. It may also be linked to an active Facebook or Github account.
On the top right corner of the page, click "Create Account"
Select a username and password and enter an active email address.
Click "Sign up"! You should quickly receive an email to confirm your account
Now that you are logged in, you can upload your own images to the Map Warper server in order to georeference them.
By uploading images to the website, you agree that you have permission to do so, and accept that anyone else can potentially view and use them, including changing control points. As a freely available tool, you should not expect Map Warper to store your map indefinitely; once it has been georeferenced, you should plan on storing your georeferenced map on your local hard drive or a file storage platform like GoogleDrive.
Click “Upload Map” on the main toolbar (note: if you are not yet logged in, it will ask you to do so at this point)
Insert any available metadata and a description of the map. This is useful both for your own records and for anyone else searching for similar maps on the Map Warper server.
At the bottom of the page, choose to Upload an Image File from your local computer or from a URL. Once the file has been selected, click "Create"
Once the upload is complete, a new page will appear informing you that the map was successfully created as well as providing an image of the uploaded map.
Now the map is on the platform, but it does not yet have any spatial information associated with it. The next step is to use what are called "control points" to place your map in a “real-world” coordinate system where it can interact with other types of spatial data.
Note that you can also edit the original metadata fields, crop out unwanted portions of your map, and see a history of the interactions with the map at this point from the main toolbar
Once your image is displayed, select "Rectify" on the main toolbar.
This opens up the Georectifying page, the most important page in this tutorial. It is composed of two windows, one showing your map and one showing a “basemap” which you will be using to geolocate your map.
In the top right corner of each map there are a series of buttons that help you navigate the map and add control points
The goal here is to create what are called “control points,” or corresponding points between your uploaded map and the basemap. This is done by simply zooming in on each map in turn and creating a control point as close to the same spot as possible in each map.
The last two icons appear only on the basemap and are used to adjust it as needed to help with georeferencing
Navigate on your map to an easily identifiable location. In this example, I have chosen the tip of the island in the middle of Paris that the Notre Dame Cathedral is on. Note that an external mouse with a scroll wheel can make the zooming/moving process easier; zoom and pan buttons are also provided in each window.
Click the “Add Control Point” icon, then click again on your map in the desired location. A little control point should pop up!
Swap to the basemap and click the “Pan” tool (the hand) to find the proper location, then again select the “Add Control Point” tool and click on the corresponding point on the Basemap.
Once you have created a control point on each map, scroll down and click the “Add Control Point.” This will add the control point coordinates to a list of points below, which you can see by clicking the words “Control Points."
You will need at least 3 control points to geolocate your map, but more are preferable. It is also advisable to spread your points across the map rather than have them clustered; this will ensure that the map is georeferenced equally across its extent rather than only in one area. If you need to delete a point, this option is available from the "Control Points" table
Remember: places change over time! Try to use features that remain as consistent as possible on both maps. In general, the more control points you add, the more accurate your map will be.
After you add the 4th control point, your table of points will start including error information, as the points are triangulated against one another. Note that this error may not mean that you are doing anything wrong, particularly in an older map that is not as spatially accurate as something more modern! On the other hand, if your error is quite high and you believe your map is relatively accurate, you may have misplaced a control point somewhere. Usually, high error is caused by a single point being misreferenced.
When you feel like you have enough points scattered around your map, we are ready to georectify the map! Remember you can always come back later and add new points or remove old ones if you feel like the result is not to your liking. To georectify your map, just click “Warp Image!” at the bottom of the page and you’ll get a notice that your rectifier is working.
When the map is finished rectifying, you will get a notification that rectification is complete. Now, you should be able to see your map overlaid on the basemap, as well as turn it on and off or make it more or less opaque to check for accuracy!
If the map is to your liking, you are ready to export. Map Warper offers a variety of ways to export your map depending on your needs.
To export your map, select the Export tab on the toolbar. A window like that seen below will pop up, giving you a variety of choices for exporting.
Some exporting options:
GeoTiff:
public domain standard; easily imported into a wide variety of spatial platforms like ArcGIS or QGIS; good for backing up your georeferenced map on your local computer or in cloud storage like Google Drive
.kml:
Easy import directly into GoogleEarth
Tiles (Google/OSM scheme)
Useful for loading into tools like ArcGIS online and Knightlab StoryMap JS. Remember to ensure a backup of your files elsewhere though in case your map is eventually removed from Map Warper.
Adding your tiles to an ArcGIS online map can be complicated. From an empty map, choose Add --> Add Layer from Web and then select a "Tile Layer". Where it says “URL” copy over the Tiles (Google/OSM scheme) URL from your Map Warper file. It will look something like: https://mapwarper.net/maps/tile/49503/{z}/{x}/{y}.png.
However, note that the end of the URL should look like “{level}/{col}/{row}.jpg” according to the instructions given. Replace the {z}/{x}/{y}.png at the end of your URL with this ending, creating something that looks like: https://mapwarper.net/maps/tile/49503/{level}/{col}/{row}.jpg. It should now load properly into ArcGIS online
This tutorial goes along with the tutorials Tiling High-Resolution Images for Knightlab StoryMapJS.
So, you have some tiles of a high-resolution image; now you need to allow other people to see that image in a zoomable viewer. While the previous tutorial (linked above) allowed you to import your image into KnightLab StoryMap JS, sometimes you want to display the image on your personal site. This tutorial will show you how to create an iFrame embed that can be used on sites like CampusPress or Omeka, with your tiles hosted in GitHub (as with the StoryMap example).
Here we will review 3 tools that can help you achieve this goal: Zoomable, Zoomify, and Leaflet
Zoomable (https://zoomable.ca) allows users to quickly create zoomable images, but its free version is limited (5 uses per email). Various plugins for sites like WordPress and Knightlab do exist but must be purchased individually. The free version also does not allow you to host your own images, so you are dependent on their servers staying up. However, it is a very quick process if you only have a few images.
Go to the free Zoomable upload page (https://srv2.zoomable.ca/upload_new.php)
Upload your image (30MB max; supports JPG/PNG/TIF) and enter your email address
An email containing your zoomable image will be sent to you; once it has, click on the included link (Example from San Francisco Chinatown Image)
In the top-right corner is the "embed image" button. Clicking this button will give you an iFrame embed code that you can copy and paste into your website html (for information on iFrame html code, see this page for examples)
Pros: Very fast and very little coding required (only iFrame embed); don't need to tile the images yourself
Cons: Minimal free service, don't control your own tiles
Check out our example CampusPress page for what this embed looks like
Our old friend Zoomify (last seen in the previous tutorial where we used Zoomify Free to tile an image) also provides us the basic information we need to host our image on a site like GitHub. When you download Zoomify free, it comes with a variety of other files, including a javascript file (ZoomifyImageViewerFree-min.js) and an HTML file (TemplateWebPage.htm). The files are linked below for your reference.
In order to visualize our image using the Zoomify web viewer, we need to create a GitHub repository that contains (1) our tiles, (2) the javascript file linked above, and (3) an HTML file specific for our image.
First, we need a GitHub repository to put our image. If you are unfamiliar with GitHub, the previous tutorial walks you step by step on how to create an account, create a repository, and download Github Desktop to make everything easier. It also shows you how to upload your tiles, which we will do again here.
For this tutorial, I've created a Tiling and Mapping GitHub repository and linked it to my Github Desktop. I will now copy into it the tiles created in the previous tutorial and the Javascript file linked above.
Now we need to create an HTML file that will contain our zoomable image. Luckily, this is easier than it might sound because Zoomify has already done most of the work for us! First, open up a blank text document to create your file. I recommend using Atom (any OS) or Notepad++ (PC only), as these text editors are helpful for coding. I'm going to save mine as zoomifySanFran.html in my repository folder (make sure you save it as an HTML file and not a text file!)
Copy the following code, which is taken directly from the example HTML file supplied by Zoomify, into your HTML file and save it.
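A trimmed-down approximation of that template is below; the container ID, page title, and viewer size are illustrative, and line numbering in the shipped TemplateWebPage.htm may differ slightly.

```html
<!DOCTYPE html>
<html>
<head>
<title>Zoomify Image Viewer</title>
<script src="ZoomifyImageViewerFree-min.js"></script>
<script>
  // point the viewer at the folder of tiles created by Zoomify
  Z.showImage("myContainer", "ZoomifyImageExample");
</script>
</head>
<body>
<div id="myContainer" style="width:900px; height:550px;"></div>
</body>
</html>
```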
In order to load your file into the viewer, you only need to change one line of code! On line 8, where it says ZoomifyImageExample, change the name of the folder to the folder containing your tiles; in my case, that is sanFran (see above), so line 8 ends up looking like:
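Assuming the container ID from the sketch above, something like:

```javascript
Z.showImage("myContainer", "sanFran");
```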
Save your HTML file, and sync it with GitHub as described in the previous tutorial (Commit to Main-->Push Origin in your Github Desktop window). Once synced, your GitHub repository will look something like this:
Now if you have published your page in the settings tab, your image should now be viewable in the viewer using the HTML page for your site + the name of the webpage you created; in my case, this is: https://bcdigschol.github.io/tilingMappingWorkshop/zoomifySanFran
Once it is hosted on GitHub, you can embed it as an iFrame on any website you want, with the syntax being as follows (where you can replace the URL with your own file URL, and the title with your own title)
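For example, using the URL from the previous step (the title, width, and height values are up to you):

```html
<iframe src="https://bcdigschol.github.io/tilingMappingWorkshop/zoomifySanFran"
        title="Zoomable San Francisco Chinatown map"
        width="100%" height="600" frameborder="0"></iframe>
```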
To see an example iFrame embed, see our Test CampusPress Page.
And that's it! You can use the same repository for all of your images if you wish, just make a separate .html file that you can use for your iFrame embed!
Leaflet is the leading open-source JavaScript library for mobile-friendly interactive maps. Weighing just about 39 KB of JS, it has all the mapping features most developers ever need.
For more details about setting up an interactive leaflet map and adding spatial data, check out our tutorial on the subject. Here we will just be using the viewer to view a tiled image.
Let's return to our repository, but this time we will download the Leaflet JavaScript package as well as a plugin for viewing tiled images
Download Leafletjs and the Zoomify leaflet plugin (Code --> Download .zip file). Drop them into your repository folder with your tiled images.
There are lots of other leaflet plugins for viewing tiled images and doing lots of other stuff as well. Check out the Leaflet Plugin Page
Like the previous example, this plugin comes with a nice example script for your html. You can find it in the downloaded folder (example.html) or can copy it from below. As before, you want to create a new html file. I called mine "leafletZoomifySanFran.html"
Now, there are a few changes we want to make to alter the code for our needs.
On line 5, you can change the title of the page to whatever is appropriate for your image
On lines 6 and 19, you should change the URL to point to the local version of leaflet you have downloaded into your repository. This means it will look something like "./leaflet/leaflet.css" and "./leaflet/leaflet-src.js" depending on the name of your leaflet directory inside your repository
On line 20, make sure "src" points to the location of the downloaded Leaflet Zoomify Plugin. In my case, I changed it to "./Leaflet.Zoomify-master/L.TileLayer.Zoomify.js"
On line 30, you need to change the URL to your github repository folder containing the tiled images. In my case, this was "https://bcdigschol.github.io/tilingMappingWorkshop/sanFran/{g}/{z}-{x}-{y}.jpg"
Finally, on lines 31, 32, and 33, change the information to the width, height, and attribution of your image, which can be found in the Properties or Info of your image.
The final version of my script is as follows:
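A trimmed-down version of a page along those lines is sketched below. The width, height, and attribution values are placeholders for your own image's details, the line numbering will differ from the plugin's example.html, and the factory function L.tileLayer.zoomify with its width/height/attribution options follows the plugin's bundled example, so double-check it against the version you downloaded.

```html
<!DOCTYPE html>
<html>
<head>
  <title>San Francisco Chinatown</title>
  <!-- local copy of the Leaflet stylesheet in this repository -->
  <link rel="stylesheet" href="./leaflet/leaflet.css" />
</head>
<body>
  <div id="map" style="width: 100%; height: 600px;"></div>

  <!-- local copies of the Leaflet library and the Zoomify plugin -->
  <script src="./leaflet/leaflet-src.js"></script>
  <script src="./Leaflet.Zoomify-master/L.TileLayer.Zoomify.js"></script>

  <script>
    // create the map; adjust the starting view so your image is visible
    var map = L.map('map').setView([0, 0], 0);

    // point the plugin at the GitHub-hosted tile folder
    L.tileLayer.zoomify('https://bcdigschol.github.io/tilingMappingWorkshop/sanFran/{g}/{z}-{x}-{y}.jpg', {
      width: 3000,   // placeholder: your image's pixel width
      height: 2000,  // placeholder: your image's pixel height
      attribution: 'Image source attribution here'
    }).addTo(map);
  </script>
</body>
</html>
```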
Sync your repository and then your image should be visible on the web (https://bcdigschol.github.io/tilingMappingWorkshop/leafletZoomifySanFran.html)
And the iFrame works the same way as previously!
See the results on our embed test page!
That's all for now. Please contact Matt Naglak, Digital Scholarship Librarian (naglak@bc.edu) if you have questions or come across any errors. And check out our other tutorials!
API - Application Programming Interface. REST API - Representational State Transfer API. Server: the place that has the resources. The client makes a call (an API call) to the server, and the data comes back over the HTTP protocol. API key - an identifier for the user, developer, or calling program making requests to a website, normally used to assist in tracking and controlling how the interface is being utilized.
Post: Create
Get: Read
Put: Update
Delete: Delete
/Resource/ {Required parameter} (Optional parameter) API Key
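For example, the OpenFEC candidate endpoint used in the Google Sheets exercise above fits this pattern, with {candidate_id} as the required parameter and api_key identifying the caller (any other query parameters are optional and endpoint-specific):

```
https://api.open.fec.gov/v1/candidate/{candidate_id}/?api_key=YOUR_KEY
```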
The following covers setting up your map, adding data, and popups.
Leaflet is the leading open-source JavaScript library for mobile-friendly interactive maps. Weighing just about 39 KB of JS, it has all the mapping features most developers ever need.
Leaflet is designed with simplicity, performance, and usability in mind. It works efficiently across all major desktop and mobile platforms, can be extended with lots of plugins, has a beautiful, easy to use and well-documented API, and a simple, readable source code that is a joy to contribute to.
Leaflet offers much more flexibility for story-mapping than many other tools used for such projects; on the other hand, it does demand a basic knowledge of coding in JavaScript.
Here, we walk through the basic steps of setting up a Leaflet map, adding spatial data, and creating popup boxes that go along with that data. Though there are many ways to organize your map files, here we will first create a basic HTML file that will hold your map and then a second JavaScript file that will contain the code for the map and spatial data. To download all the tutorial files discussed in this lesson, click below or visit our GitHub page!
Also, see Leaflet's own Quick Start Guide for an introductory tutorial
Your map needs to be situated on an HTML page in order for it to function. Luckily, it only takes a few steps to get this up and running.
Create an empty folder to hold your map files, and then, in that folder, create a new text file in a text editor and save it as index.html. I recommend using Notepad++ (Windows only) or Atom as your text editor, as these are designed for coding, unlike regular Notepad or Word. At the top of your file, put the following introductory code to set up your HTML and name your map. Each line is commented to let you know how it is functioning.
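A minimal version of that opening code (the page title is a placeholder, and the header will be closed a couple of steps below) looks something like:

```html
<!DOCTYPE html>            <!-- tells the browser this is an HTML5 document -->
<html>                     <!-- opens the HTML page -->
<head>                     <!-- opens the header, which holds the page setup -->
  <meta charset="utf-8">   <!-- sets the character encoding -->
  <title>My Leaflet Map</title>   <!-- names your map page -->
```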
Next, you need to tell your HTML to access the Leaflet JavaScript library and its associated CSS stylesheet. You have two options for this, as you can either host the files locally on your computer or call them from their already-hosted location on the web.
To call them from the web, simply copy these lines into the <head> section of your HTML
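The Leaflet site provides hosted copies; for example, using the unpkg CDN (adjust the version number to the current release):

```html
<!-- Leaflet's stylesheet and JavaScript library, loaded from the web -->
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css" />
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>
```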
To host the files locally, download and unzip Leaflet, placing the entire Leaflet folder into the folder with your HTML file. Now, you simply need to change the "href=" and the "src=" links from the above code to the current location of your file. It should end up looking something like this. You can also close your header at this point.
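Assuming the downloaded folder is simply named leaflet and sits next to index.html, the local version would be along these lines, after which the header can be closed:

```html
<!-- Leaflet's stylesheet and JavaScript library, hosted locally -->
<link rel="stylesheet" href="./leaflet/leaflet.css" />
<script src="./leaflet/leaflet.js"></script>
</head>
```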
Any Leaflet plugins that you want to use can be accessed the same way! Many are hosted online, or you can download them and host them locally. In either case, you will call links to their .css and .js files in the HTML header!
Finally, you need to create the container that will hold your map and set its size on the html page, as well as call the JavaScript file that will be your map itself. This will take place in the <body> of your code. And that's it, your html file is done for now!
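A sketch of that <body> section, assuming the map's JavaScript file will be called myMap.js (the naming advice below applies):

```html
<body>
  <!-- the container that will hold the map, with its size set here -->
  <div id="map" style="width: 100%; height: 600px;"></div>

  <!-- the JavaScript file that will contain the map code itself -->
  <script src="myMap.js"></script>
</body>
</html>
```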
The Javascript file name associated with your map can be anything you'd like, but we recommend tying it to the name of your map for easy association. Remember, capitalization is VERY important to take note of in coding, and no spaces, please!
So you've got an HTML file with a map container, but that container is currently totally empty. In this step, we will walk through how to load up an externally hosted world map and show you how to open the HTML file on your own computer. It only takes a few lines of code!
First, create a new empty text file and save it as a javascript file, making sure to use the same file name that you referred to in the HTML file. The first step is to define your map options and create the map object itself. There are a variety of default mapOptions you can set; here we simply choose the center of the map, its initial zoom, and its max zoom level.
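A sketch of that first block of JavaScript; the center coordinates and zoom levels are placeholders (the tip below explains how to look up coordinates):

```javascript
// basic map options: where the map starts, its initial zoom, and its maximum zoom
var mapOptions = {
  center: [42.336, -71.168],  // placeholder [latitude, longitude]
  zoom: 16,                   // initial zoom level
  maxZoom: 19                 // how far in the user is allowed to zoom
};

// create the map object inside the "map" div defined in the HTML file
var map = L.map('map', mapOptions);
```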
Trying to find the coordinates for a specific location? Google Maps makes it easy; just right click and select "What's here?", and the coordinates of that location will appear.
Now you need to load a background map. While it is possible to load a locally hosted tiled map as a background map (which will be discussed in a different tutorial), here we will simply use one of the maps hosted online already by ArcGIS online. Notice that the basemap variable is made up of a few parts: a variable name, a URL for where the layer is located, and attribution information. Other layer options are also possible here, similar to the map options above.
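A sketch of that layer, using one of Esri's freely hosted ArcGIS Online imagery services (the URL and attribution text are examples; swap in whichever hosted basemap you prefer):

```javascript
// background imagery: a variable name, the tile URL template, and attribution information
var basemap = L.tileLayer(
  'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',
  { attribution: 'Tiles &copy; Esri' }
);

// the key step: add the layer to the "map" variable defined above so it appears
basemap.addTo(map);
```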
The last line of this code is key; for each variable, you must add it to your named map variable (in this case the "map" defined above) in order for it to appear on your map.
Note that there are several providers of free and open source tiled background maps online, some of which do require you to register. See this excellent previewer for more information and for URLs for various maps.
Now it's time to open your map! This is simple enough: just drag your index.html file into a browser window, and your map should appear, centered on your favorite university library!
Now, we have some background imagery, but what about adding other spatial data? There are lots of ways to add spatial data to your map, depending on what form you want it to take. Here we will talk about two methods: adding points individually and importing points from an Excel file or a CSV (comma-separated values) file exported from Excel.
Adding point data individually is as easy as one line of code! Just give your point variable a name (here bcLibrary), add the coordinates, and add it to your map!
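For example (the coordinates are placeholders for your own point of interest):

```javascript
// a single point marker, added straight to the map
var bcLibrary = L.marker([42.336, -71.170]).addTo(map);
```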
Adding a polygon is just as easy! You just need the coordinates for its corners, like this one which is roughly situated around the library.
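Along these lines, where each pair of coordinates is one corner (the values here are placeholders):

```javascript
// a polygon defined by the coordinates of its corners
var libraryOutline = L.polygon([
  [42.3365, -71.1705],
  [42.3365, -71.1690],
  [42.3355, -71.1690],
  [42.3355, -71.1705]
]).addTo(map);
```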
Now open your map again, and it should look similar to that below!
Want to stylize your point marker in a unique way? Check out the linked tutorial for more info!
Sometimes you already have your spatial data in an external location (either in an excel file or on another platform) and you want to import it fully formed into your map. In order to do this, you need to convert your data into a geoJSON format, a standard spatial format for data that is able to be easily imported into Leaflet.
From software like GoogleEarth, you first want to export your data as a .kml file. From here, there are many free online platforms like MyGeodataConverter that can convert the files to GeoJSONs for you!
Finally, you can convert your data even if it is in a simple excel file. The key here is to export your file as a .csv. Then, similar to .kml files, there are opensource tools online that will convert it for you.
Remember! To convert your excel files make sure you have individual columns with the Latitude and Longitude data for each point of interest! This is vital for converting your point data to the needed form with its spatial data.
Great! Now you've got your .geojson file; it's time to get it into your map.
First, make sure a copy of your .geojson file is in the folder with your HTML and map javascript files.
Now, there are 2 different ways forward here:
You can simply copy and paste the entire .geojson into your code, if desired (see an example here). Simply set it equal to a variable and you are good to go. On the downside, this approach both clutters up your code, potentially making it very long, and makes it more difficult to find and change any desired attributes
A second approach is to edit the GeoJSON file itself, set the data equal to a variable (same as was done above), and then resave the file as a .js JavaScript file. If this approach is taken, you must also call the JavaScript file in the header of your .html page, just as you called the Leaflet JavaScript file. In doing this, you make your variable accessible within your map. The below spatial data was created in Google Earth, exported as a .kml file, converted to a .geojson using the method described above for KML files, and then saved as a .js JavaScript file after adding the variable name. Each file in this process can be seen in the files associated with this tutorial
Now your external data is linked within your map's code, but it still needs to be added to the map! Whichever of the above two methods you used, the following line of code will add the data to your map.
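A minimal sketch of that line, assuming your GeoJSON was stored in a variable named campusData (use whatever variable name you chose):

```javascript
// Works for either approach above: turn the GeoJSON variable into a layer and add it
var dataLayer = L.geoJSON(campusData).addTo(map);
```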
A third bonus method for importing your .geojson! You can simply leave the GeoJSON file as it is and use a plugin to bring your data in. See the AJAX GeoJSON plugin for more information. Using this method means that some standard methods for manipulating the data within your map will not be available, however.
Some browsers will not let you view GeoJSON files locally in a browser window, due to CORS security issues. See here for instructions on how to enable CORS in Firefox and IE.
So now you have some points, lines, or polygons on your map, but what do they mean? For example, in Google Earth, we added names and descriptions to our points, but that information is not yet visible in our Leaflet map. Popup boxes attached to your data can offer more information to those using your maps. And making them is quite simple!
Let's look back at the first marker we used to indicate the O'Neill Library (named bcLibrary). To create a popup box for this, we simply use one line of code. Then clicking on the point creates our popup!
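A sketch of that one line, using the bcLibrary marker from earlier (the popup text is just an example):

```javascript
// Attach a popup to the existing marker; it opens when the marker is clicked
bcLibrary.bindPopup("<b>O'Neill Library</b><br>Boston College's main library.");
```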
Note how you can use HTML styling methods like <b> to style your text (in this case, to make it bold).
If you are importing large amounts of spatial data, sometimes it already has attribute data attached to it. For example, in our GeoJSON data imported from Google Earth above, each point already had a "Name" and "description" attribute. Instead of retyping that into our code, we can call the attribute directly in our popups!
When importing your spatial data, you can automatically create a popup for every piece of spatial data in your dataset. We do this by adding the "onEachFeature" option when adding the GeoJSON to the map; here, this calls a function called "popUp" which we will use to define our popup.
Now you can write and customize a function called popUp to create whatever type of popup information you want!
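A hedged sketch of what that might look like, assuming your features carry "name" and "description" attributes (as in the Google Earth export above) and that your GeoJSON variable is named campusData; this replaces the simple add-to-map line shown earlier:

```javascript
// Called once for every feature in the GeoJSON; adjust the property names to match your data
function popUp(feature, layer) {
    layer.bindPopup(
        "<b>" + feature.properties.name + "</b><br>" +
        feature.properties.description + "<br>Have a nice day!"
    );
}

// Add the GeoJSON with the onEachFeature option so every feature gets a popup
var dataLayer = L.geoJSON(campusData, { onEachFeature: popUp }).addTo(map);
```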
Now every piece of spatial information in your Google Earth GeoJSON will have a popup box saying the name of the space it is associated with (assuming it has an attribute titled "name" that it was imported with, such as a header from an Excel spreadsheet) and its description, and it will wish you a nice day!
There are many other things you can do with popups, including adding images, videos, and hyperlinks. Check out the Leaflet documentation for much more info about popup boxes.
The easiest way to immediately share your map online is to upload it to GitHub.
1) Create a GitHub account
2) Click Start a Project, which will then ask you to name your project and provide a brief description.
3) Upload your entire mapping folder into the project repository by clicking Upload an existing file on the main repository page.
Now that your map files are in, anyone with access to your repository can download your mapping files. You can also set up a unique URL to display your map in just a few easy steps:
1) Go to Settings on the central toolbar
2) Scroll down to the Github Pages portion of the settings
3) Under "Source" select Branch: main to set your main branch as the viewable page. Then hit Save.
4) The page will refresh. If you scroll back down, you should now see a statement in the Github Pages portion saying: Your site is ready to be published at YYY. Now you have a URL for your map! You can share this URL with others or put it on the main page of your repository in the About or Readme sections.
That's all for now! In the future, there will be tutorials on how to add new raster layers to your map, turn on and off layers, and other functions!
The Introduction to Text Analysis tutorial, created by the BC Digital Scholarship Librarian, provides a basic introduction to text analysis concepts, the tools Voyant and Lexos, and how to create a corpus. Example texts are humanities-oriented, but text analysis can be used in any disciplinary field.
In this tutorial, the tools Voyant and Lexos are used. They have been chosen because they are "out of the box," meaning they don't require any coding, they are relatively easy to use, and they have many capabilities. As such, they are great tools for getting started in text analysis.
Generally speaking, "out of the box" tools tend to be blunter instruments in that they do not allow for the level of customization and specificity that coding and scripting languages like Python and R do. Consequently, if you want to run more in-depth text analysis queries, you will eventually need to gain some coding and scripting skills. If tools like Voyant and Lexos serve all of your text analysis requirements, then they might be all you need.
Lexos, which is more complex than Voyant, allows some more in-depth work and can be used for scraping, scrubbing, and cutting text in addition to conducting analyses. Voyant has a flexible and friendly interface that provides a lot of different ways into a text. In this tutorial, you will learn about a way in which Lexos and Voyant work well together.
When first using Voyant and Lexos, it is good to look over their guides and other helpful information. Voyant has its own documentation, with the tool instructions being particularly helpful. The question marks on the Voyant interface also provide information. Lexos has helpful information within the tool. Click on a question mark to learn what something does, or click on "Help" in the top right of the navigation bar to open the help window on the left.
When conducting a text analysis, it is important to keep in mind that:
1.) Word meaning changes over time.
While it might be understood, it's important not to forget that word meaning changes. One can use a source like the Oxford English Dictionary to look up the particular meaning of a word at a particular time.
2.) The word context is key.
In many, if not most, text analysis undertakings, word context is crucial to the analysis. Exceptions can occur when, for example, one is only interested in the number of times a word appears and not in the way the word is used.
3.) There may be issues of omission in the corpus.
It's important to keep in mind when exploring or creating a corpus that there may be issues of omission. People of color, women, and other marginalized groups have been published less throughout history and, therefore, a massive corpus--like Google Books or HathiTrust--will skew white and male. (Other areas of omission can be based on things like language, geography, time period, etc.) Moreover, it's important to consider what gets digitized. There can be (and no doubt is) bias in the decisions that drive the selection and funding of what ends up online.
4.) There can be quality issues with the corpus.
Often the texts used in text analysis come from books and documents that have been OCR'd. OCR (optical character recognition) converts images of text into digital (machine-readable) text. Due to things like the quality of images and scanning mistakes, there can be OCR quality issues and, therefore, text errors.
Below are two examples of how OCR errors can occur. The one on the left is from a first edition of the 18th-century novel, The Life and Opinions of Tristram Shandy, Gentleman. With books from this period, you get characters such as the long s ( ſ ) and, often, ink bleed through, and foxing (all of the little dots that come from age) which can impact OCR. (These kinds of issues used to be much more of a factor before advancements in OCR technology.) The example on the right shows a scanning mistake made when the book was moved during the process. (Even with the advancements of technology, OCR issues are unavoidable in this case.) When working with a large or massive corpus, these kinds of errors might be inconsequential as long as there is a small enough number of them. With smaller corpora, such errors can have a greater impact and skew text analysis results.
The following resources provide text for text analysis projects.
(includes plain-text [“full text”] access to books, issues of magazines, etc.)
(BC library resource)
(large number of texts available in variety of forms, including plain text; texts are accessed one at a time)
(16 million volumes, mostly in English)
(12.8 million pages of American newspapers)
(narratives & literature from the American South)
(large collection of classical texts, much of it encoded in TEI/XML)
(ca. 50,000 early English books, many encoded in TEI/XML)
(197,745 London criminal trials, 1674-1913)
(debates & journals of the Canadian Senate & House of Commons)
(Parliamentary debates, 1901-1980)
(UK Parliamentary debates)
(see also ; 10,000 premodern Islamicate texts)
and (efforts to use computer vision to recognize handwriting)
(557 classical texts linked with a gazetteer of the ancient world)
(widely used corpora of American English)
(American adult fiction, 1774–1900)
(170K hours of captioned news programs; see for information on access)
(nearly 2 million pages of media-related books and articles, 1875-1995)
(classic Christian texts)
(1.8 million NYT articles + NYT-supplied metadata)
(many datasets from European libraries & archives, from papyri to photographs to newspapers)
(nearly complete run of Foreign Relations of the United States; see to obtain full text)
(a huge collection of websites, texts, audio, and other media, available for bulk download via wget)
(a catalog of Twitter datasets that are publicly available on the web)
(an effort to develop tools to analyze features of digital texts)
(“220,579 conversational exchanges between 10,292 pairs of movie characters”)
(repository of life sciences books, articles, and preprints)
(565 million documents collected by the National Library of Australia, including a sizeable collection of newspapers)
(4 million-word sub corpus of the 100 million-word British National Corpus, with parts-of-speech tagging in XML)
TEI-Encoded
Constellate provides: 1.) access to over 29 million documents, including content from JSTOR and Portico, and 2.) Research Notebooks (Jupyter Notebooks) with pre-built code snippets for a number of text analysis tasks. Text data and notebooks can be used together or separately; data is downloaded in JSON format.
Add a control point to the maps
Move a control point that has already been added to the map
Pan around the map
Change the basemap to a Custom, already georectified basemap of your choosing (if available)
Swap between the initial basemap and satellite imagery, depending on which makes it easier to georeference your map
(BC library resource)
Resources from
Time: 9am - 5pm (Days will go no later than 5pm, but we expect that most will end at 4pm or sooner.)
Description: Each day will consist of a variety of presentations, discussions, and hands-on activities. Breaks are scheduled throughout and lunch will be an hour. There will be 20-30 minute DS-related presentations or participant lightning talks during lunches. Advanced topics will be covered at the end of the day and attendance is optional.
Topics: Welcome and Orientation, Introduction to DS Methodologies & Issues of Justice, Anatomy of a Digital Scholarship Project, Project Evaluation, User Experience, Tools Account Creation and Installation
Lunch topic: Digital Pedagogy and Lightning Talks (Jess and Marina)
Topics: Introduction to Data, Mapping
Advanced Topic: QGIS
Lunch topic: Lightning Talks (Isabel, Lyn, and Christy)
Topics: Data Visualization, Network Analysis
Advanced Topic: Mapping in Tableau
Lunch topic: Open Access, Data, and Education
Topics: 3D Modeling and Immersive, Text Analysis
Advanced Topic: Text Analysis with Python
Lunch topic: Intellectual Property
Topics: Digital Exhibits, Story Maps, Project Planning
Lunch topic: Center for Innovation in Learning
The following information will be covered on the first day and will help us prepare for the days to follow.
Documents
Spreadsheet - Please add your info on the first day when you arrive
Shared notes & detailed schedule doc - This is our group notes doc
Accounts to Create
ArcGIS Online (Select the "Public Account" option)
Google Colab (for the optional Advanced Text Analysis session)
Software to Install
QGIS (for the optional Advanced Mapping session)
Melanie Hubbard, Digital Scholarship Librarian, incubator organizer (contact)
Topics: Introduction to DS Methodologies, User Experience; Digital Pedagogy; Open Access, Data, and Education; Text Analysis; Digital Exhibits; Story Maps
Matt Naglak, Digital Scholarship Librarian (contact)
Topics: Anatomy of a Digital Project, Project Evaluation, Mapping, 3D/AR, Project Planning
Allison Xu, Data and Visualization Librarian (contact)
Topics: Introduction to Data, Data Visualization, Network Analysis
Gabe Feldstein, Digital Publishing and Outreach Specialist (contact)
Topic: Open Access, Data, and Education
John FitzGibbon, Associate Director of Digital Learning Innovation (contact)
Topic: Center for Digital Innovation in Learning
Elliott Hibbler, Scholarly Communication Librarian (contact)
Topic: Intellectual Property
Participants will create projects after the incubator week and present them in early fall 2022, at which point they will receive feedback to help them further develop their work.
It is understood that some participants will be able to get further with their projects than others. The DS Group will provide workshops that dive deeper into DS tools, consultations, and other types of project support.
Projects may be one of the below. (The descriptions are deliberately open-ended as we want you to get out of the process what is most meaningful to you.)
A well-articulated plan for a research-based DS project and a prototype that demonstrates aspects of how the project will work.
A well-articulated plan for further developing an existing research-based DS project and a prototype that demonstrates aspects of how the new developments will work.
A digital pedagogy/DS-based lesson and a prototype that demonstrates digital component(s). A "lesson" may involve the creation of an assignment, a learning digital object, or both.
In the Digital Studio, we have access to equipment for both 3D modeling and laser scanning. The aim of this hands-on experiment is to explore and try to create 3D models, perhaps for use in the augmented reality (AR) scene to be created in the next part of the incubator.
Photogrammetry setup with Lightbox and basic DSLR camera, good for objects down to about the size of your fist
Hand-held Laser Scanner (borrowed from Schiller), good for quick scanning of larger models (and people!)
Hand-held laser Scanner with roundtable setup, good for objects down to 3cm
Multiple Collections: Anchor collections from JSTOR and Portico, with additional content sources continually added.
Data Download in JSON.
Open content - bibliographic metadata, full-text, unigrams, bigrams, trigrams.
Dataset Dashboard: Easily view datasets you have built or accessed.
Dataset ID: unique identifier of the extracted dataset, can be used for retrieval in research notebooks.
Analyze: tutorial versions of the research notebooks for learning how to use them.
Download metadata in CSV file format, raw text data in JSON file format.
Built-in visualizations available by clicking the link under word cloud.
This notebook finds the word frequencies for a dataset.
Step 1: Click "Jupyter" and go to the main directory.
Step 2: Go to the folder "Data".
Step 3: Check "stop_words.csv" and click Download.
This incubator section is in development. Expect to see more information soon.
The 2022 summer digital scholarship incubator will cover a wide cross-section of digital scholarship methods and tools. Broader topics such as project evaluation, user experience, and intellectual property, will also be incorporated. The incubator will culminate in the creation of a small project that will be presented early in the fall semester.
Update: We are excited at the number of disciplines being represented by our twelve participants. They include Art History, Classics, Education, English, History, Nursing, and Social Work.
When: June 6th-June 10th
Where: Center for Teaching Excellence (O'Neill 246)
Questions?: Email digitalscholarship@bc.edu
Reality Composer (RC) for iOS, iPadOS, and macOS makes it easy to build, test, tune, and simulate AR experiences for iPhone or iPad. With live linking, you can rapidly move between Mac and iPhone or Mac and iPad to create stunning AR experiences, then export them to AR Quick Look or integrate them into apps with Xcode. This tutorial will introduce the basic functionality of the tool and review the interface. Please note that screenshots for this introduction were created using Reality Composer on a MacBook. The screen layout is slightly different when working directly on an iPad, but the steps are the same.
When you first open up a new Reality Composer project, your first decision is what kind of anchor you want for the project. The anchor type determines what requirements are needed to open your scene in AR. When you start up RC, you are asked to select an anchor type for your project. There are 5 relatively straightforward anchor types to choose from in Reality Composer:
Horizontal - will look for any horizontal surface (e.g. a table or the floor) to open your experience on
Vertical - will look for any vertical surface (e.g. a wall) to open your experience
Image - will look for a specific defined image that you determine (e.g. a painting or business card) to open your experience around
Face - will look for a person's face (as recognized by Apple's camera) to open your experience around
Object - will look for a previously scanned physical object (see details at the end of this workshop) to open your experience around
Once you have chosen an anchor, the main window of Reality Composer will open up, which changes slightly based on the anchor you have chosen (above, the horizontal anchor). You can also change your anchor if needed in the right window Properties pane (as well as change the overall physics of the scene if desired, e.g., allowing things to fall through the floor; this can also be set object-by-object later).
The window also opens with two objects already present, a cube and a little placard showing some text. Before checking those out, let's look at the other options on the main toolbar. The most important ones for us will be the Add button, the Behaviors button, and the Properties button, but we can quickly review them all.
The Scenes option opens up a sidebar that allows you to name the "scene," or AR view, you are currently working in, as well as create new scenes that can be linked by certain actions (see below).
The Frame button just lets you navigate more easily around your scene by allowing you to zoom in on certain objects or on the scene as a whole.
The Snap button allows you to snap objects to other objects or to certain horizontal or vertical lines within the scene (similar to the Photoshop snap tool)
The Modify tool allows you to swap between adjusting an object's position/rotation and adjusting its length/width/size (this can also be done in the Properties pane, as we will see)
The Space button swaps between "global" and "object-oriented" coordinates, allowing you to move an object along various axes within your scene
The Add button adds more models to your scene, of lots of different types!
The Play button (also the AR button when working on an iPad/iPhone) allows you to test out your scene on a device. The Send to button (Mac only) allows you to send your scene directly to a linked iPad for testing
The Info button checks for updates to your models/scenes
The Behavior button allows you to assign different behaviors to your object based on specific prompts (e.g., when the object is tapped, it flies into the air)
The Properties button allows you to edit the properties of a specific model or of the scene as a whole.
Let's play with some objects, starting with what comes in our default scene.
Clicking on the cube will automatically open the Properties pane, which allows you to directly edit its various properties like width, height, color, etc. You can also name the object, transform it in space (if you want to set something exactly), and decide whether it will obey the laws of physics (e.g., fall to the surface if it appears up in the air). You can also edit directly in the viewing window once an object is selected; arrows pointing along axes allow you to move an object, while clicking the tip of an arrow will allow you to rotate the object around that axis. Give it a try!
Clicking on the placard will yield a similar pane, though this one also allows you to edit the text appearing on the sign. Each object you add will have its own individual properties that you can edit here. For now, edit these two items however you wish!
If you want to add a new object, just click the + Add button. There are many pre-installed 3D objects to work from, as well as signs that can hold text and "Frames" that can hold attached Images. You can also introduce your own objects.
For now, I will drop a plane in our scene, by clicking and dragging it in or by double-clicking. I then rotated it around to be pointing away from us.
Let's do one last thing before trying out our model in AR: adding a behavior! I'm going to select the plane and then click the Behaviors button on the toolbar, and a new pane will open up along the bottom of the page.
Clicking the + button will open the new behavior pane, where there are several pre-set behaviors you can choose from, or if you scroll down to the bottom you can create a custom behavior. For now, I will choose Tap and Add Force so we can give our plane some energy
Now, our behavior pane has its first behavior! You can rename the behavior (currently called Behavior) in the left pane; the Trigger pane allows you to determine what action will make the behavior start (in this case, tapping) as well as which affected objects have the trigger attached (in this case, the plane, but you could add more!)
Finally, the action sequence allows you to create a series of effects that happen, either simultaneously or one after another, when the trigger is...triggered. In this case, we are going to have the plane start moving forward at a certain velocity.
So we can make our plane move forward, but that's not really taking off. Adding a second force in an upward trajectory after a moment will make this takeoff look a bit more realistic. To add further actions to the sequence, simply tap the + sign next to the words Action Sequence in the Behaviors pane. This will then pop up a bunch of different pre-coded behaviors you can choose from.
In the image above, I added two new behaviors: a Wait behavior and a second Add Force behavior at a 45 degree upward angle. Importantly, I directly attached the Wait behavior to the first Add Force behavior (just a drag and drop), which means that these two actions will begin simultaneously, and the second Add Force will not start until the first set is complete. This means our plane will move forward a certain amount before briefly "taking off".
Now that we have an experience, we need to test it out to see how it is functioning. There are a few ways to do this.
If you are working on an iPad or iPhone, it's easy! Just hit the AR button on the top toolbar to open the experience in AR, and then the Play button to start the experience from the beginning.
If you are working on a Mac, it's a bit more difficult. If you hit the Play button on the top toolbar, the experience will start, but it will obviously not be in AR, making testing a little bit difficult (though you could certainly test the functionality, as in the images below where our plane has flown far off into the distance).
There are other options for testing from a Mac, however. If you have an iPhone or iPad handy that has Reality Composer installed, you could connect it via a USB/USB-C cord to your computer. If you then open Reality Composer on both devices and hit the Send To button on your Mac, the file will open in Reality Composer and be editable/testable on your iPad!
Note that if you are using special imported models, they may not be available on your second device, unless they have been imported there as well.
Another option is to simply export the file and share it as a full .reality file. To do this, go to File --> Export, and pick whether to export the entire project or just the current scene. After saving it on your computer, you can navigate to that folder, select the .reality file, again go to File --> Share in the Finder menu, and choose how you want to share the file (text, AirDrop, etc.) to your iPad or iPhone. Opening it on your iPhone or iPad in this way does not require Reality Composer, as it uses the built-in Apple Quick Look platform (you can also share your experience with other people in this way!).
We are going to add one last action that will make our action "replayable": returning the plane to its original location after a certain amount of time. Otherwise, we tap the plane once and it is gone!
This can be done by adding one more action to our action sequence, a Move, Rotate, Scale To action that will move our object back home. Adding this action to the end of our action list, and then selecting the plane as our affected object, will allow you to choose where you want the plane to return to. In this case, I will adjust it so it ends up back where it started (by moving the plane in the image back to the left to the starting place). Also notice I added one more Wait action so that the plane will wait one second after it stops being impacted by the Force before returning home.
And that's it! Now the plane will return to its original location (and might even let you see the built-in physics on the way as the plane hits the block as it returns). Project testing videos and the files created with this tutorial can be found below. There are obviously a ton of other behaviors and models to play with, so give it a try!
As a bonus, you could use the Hide and Show behaviors to make the plane seem to magically "appear" back in its home location at a certain moment. See if you can make it work!
A DS project can roughly be divided into three sections focusing on the project's data, the presentation of that data, and the contextualization of that data.
Reality Composer is free to download from the App Store for iPad or iPhone, or can be downloaded for Mac as part of the free Xcode developer tools (Apple account required). A warning: Xcode is quite a large program for developing applications, so it may not be feasible for older MacBooks or those without much storage space. Running Reality Composer on a MacBook also requires you to send your project to an appropriate iPad/iPhone for testing (see below) or open it in an emulator using Xcode. One important fact to note is that Reality Composer projects (i.e., what opens in Reality Composer) are saved as .rcproject files, while sharable experiences that open directly on an iPhone or iPad with the Quick Look tool are .reality files.
This portion of the incubator is focused on 3D and immersive technologies, with a focus on 3D object creation using photogrammetry (photographs) and laser scanning (lasers) as well as the creation of basic augmented reality (AR) scenes with Apple's Reality Composer platform.
Some Initial questions to consider when developing a DS project
Data is a very general term but includes all the information you have gathered to support the defined goal of your project. Data may be qualitative or textual (e.g. a transcription of a historical document), quantitative or numerical (e.g. a breakdown of racial demographics in Boston between 1940 and 1960), audio/visual (e.g. images, audio recordings, video), spatial (e.g. the locations of excavated ancient Greek theaters), or 3D (e.g. a 3D model of a rum keg from 1776). Data may have accompanying metadata (data about your data) describing where the data comes from and may contain other useful attributes for searching and filtering your dataset.
What kind(s) of data might my project contain?
Am I creating a dataset myself or using data that is already accessible? If creating it myself, what work might need to be accomplished to create and organize my dataset?
What ways might I want the user to engage with the raw data (in terms of searching/filtering/downloading, rather than visualization)?
Do I need permissions to use any of the data?
Over the course of the next few days, we will be talking about a variety of ways data can be presented. Data presentation can be undertaken in many ways through DS tools, ranging from a traditional document/image viewer to an interactive map to a data dashboard created through tools like Tableau (and many other ways). The key goal of data presentation is to make your data accessible and comprehensible to the user in order to support the goals and narrative of your project.
A few initial data presentation questions that may be useful as you begin to develop a digital scholarship project:
How can data visualizations help support the goals of my project?
How can data visualizations help users understand my dataset better?
What digital scholarship approaches or tools might be useful for helping me analyze and visualize my data (textual analysis, mapping, charts/graphs, exhibitions, etc)?
Are open-source tools for accomplishing my goals already available, or will I need funding for custom development?
Sometimes overlooked, data contextualization can be just as important as other aspects of a digital scholarship project. Data contextualization includes all the necessary information a person needs to know about how your data was gathered, what issues and biases it may have, as well as any useful historical or cultural information. It may also describe how your project was constructed on a technical level. Finally, it may also include traditional scholarly information in the form of citations, bibliographies, or further readings.
Some initial questions around data contextualization:
What innate biases might my data have?
What does my audience need to know in order to understand my data properly?
How can my audience learn more about the subject discussed?
What is the goal of my project?
Who is the prospective audience for my project?
What areas of DS will be most useful for my project to use to accomplish my goals?
Do I have or need a project team to accomplish my goals?
What specific technologies or expertise will I need for my project to be considered a success? Are there members of my project team with those skills, or do I need to develop them?
Do I have any copyright concerns?
What is the timeline for my project? Are there benchmarks I could put in place to ensure the project is moving along at an appropriate pace?
Do I have or need funding for my project?
Where will my project be hosted/what kind of support am I looking for (institutional vs personal)?
Exercise visualization examples
Does this project have a clearly defined message or research question? What is it?
What is being visualized?
What are some of the applied tools or methods?
What is the underlying software or platform?
Does the visualization add value or impact and how so?
What kind of data are they using? What sources is it drawn from?
Identify any usability issues and what improvements could be suggested.
Who are the collaborators, project team members, or other partners on this project?
Is there anything else worth pointing out not covered above?
Critique a Visualization Project
Every project has the things it does well and things that could be explored further. Here are a few examples of DS projects currently out in the world
The DH Awards site contains a great number of excellent DH projects to explore; some of the below are pulled from that resource. There are also those from the BC DS site and many DH group sites from universities across the country openly available to explore.
New York Restaurant Keepers (part of NY Immigrant City project out of NYU)
Digital Harrisburg (Harrisburg University / Messiah University)
Atlas of Early Printing (University of Iowa)
Bombing Missions of the Vietnam War (ESRI)
Plotting English-Language Novels in Wales
Battle of Hong Kong 1941: A Spatial History
Mirror of Race (Boston College)
Byzantine Textiles (Dumbarton Oaks)
WHEN MELODIES GATHER: ORAL ART OF THE MAHRA (audio exhibition)
The Resemblage Project: Stories of Aging
Tudor Networks (Stanford and Queen Mary University of London, among others)
Coins: A Journey (Münzkabinett Berlin)
Historiography of the American Revolution (timeline)
The Dream of the Rood (EVT Viewer)
Digital Gabii Vol 2 (Michigan)
Women in Science 3D Lab (Bryn Mawr)
In groups, explore these exhibits. Take a quick look at all of them and then choose two to focus on and discuss the questions below.
Jane Austen: Books and Biography, Platform: Exhibit.so
Racing to Change, Platform: A developer customized site
Japanese Digitization Project, Platform: Scalar
A Gospel of Health: Hilla Sheriff's Crusade Against Malnutrition in South Carolina, Platform: Wordpress (.com and .org)
#LovecraftCountry, Platform: Wordpress (.com and .org)
What is your general impression of the exhibit? (How inviting and effective is it?)
How central are the exhibit objects to the story being told?
How does the visual design of the site affect the tone/feel?
What do you think about the flow of the exhibit and the navigability of the site?
Hands on:
We will be working with sample network data sets that include a list of names and relationships for early seventeenth-century Quakers.
Download the CSV files to your Desktop
The effectiveness of a DS project can be evaluated in a number of ways, including clarity, ease of use, and effectiveness in accomplishing its goals.
Digital scholarship is a constantly evolving and expanding world, meaning new projects are constantly being developed. While at first this may make it a bit confusing, it does have the advantage that projects with goals similar to yours (either in presentation or in dataset) most likely exist, offering a place to start from in terms of project conception and inspiration.
Performing this kind of "environmental scan" (similar to creating an initial research bibliography!) can often prove fruitful. This makes the ability to quickly evaluate a DS project beneficial, offering quick suggestions for tools or techniques you might explore and utilize or avoid entirely.
Attached is a short document we use when quickly evaluating a DS project. It is broken down into several sections useful to highlight and consider as you move forward in your own project. Additionally, several links are provided focusing on the evaluation of DH projects, particularly as DH is becoming more accepted within disciplines as a means for promotion and tenure.
Import CSV files from a local directory (pandas tutorial: pandas.read_csv)
Import Excel files (pandas tutorial: pandas.read_excel)
Read data by Google Sheets Name
This portion of the incubator is focused on an introduction to spatial data (vector and raster), with workshops on creating spatial data and finding and georeferencing historical maps. Finally, the different datasets are combined in an ArcGIS online map.
Bonus: QGIS Workshop (in progress)
Additional Info
Items brought into Exhibit.so require IIIF manifest URLs. IIIF is a standard that enables rich functionality across all web browsers for different kinds of digital objects, e.g., zooming into an image. Learn more and find IIIF resources.
1.) Open Exhibit.so and the object spreadsheet
2.) In Exhibit.so, scroll down and click on the Create an Exhibit button.
3.) Select and Add Information:
Select Scroll template
Add Title: DS Incubator (or whatever you want)
Add Description: DS Incubator Exercise (or whatever you want)
Select Public
Click Create Exhibit
When you get to the exhibit page, click Share (bottom left) and copy and paste the URL into a location you can save it to be able to access and edit the exhibit later.
4.) Adding Items
A single image of an Anti-Slavery Almanac:
The full text of an Anti-Slavery Almanac:
1.) Saving & sharing your exhibit
2.) Adding a simple object
Click the Add Item button (bottom left), paste an IIIF manifest URL into the IIIF manifest URL box, and click Import.
Click on the item and click on Add to Exhibit
Click on the plus sign...
and copy and paste the following "dummy text" into the text box:
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo.
Click on the checkmark (see below).
Click on the plus sign again and add some dummy text to the text box.
Hover over the image and you will see a plus, minus, and rotation symbol appear. Click the plus sign to zoom in, and click and drag the image to frame the illustration well. Click the checkmark.
3.) Adding a compound object
Add the following object the same way you did the object above: Click Add Item, paste the IIIF manifest URL (available below), click on new Item Import, and Add to Exhibit
Once the item is added, navigate to page 13 of the Almanac (image 18 of the scans)
Like you did before, click the plus sign, add some text, and click the checkmark
An opportunity to explore one of the spatial databases discussed in the presentation to look more closely at how their data is organized and to create your own simple spatial dataset.
Create a few spatial points of interest using the format we saw above (ObjectID, X (long), Y (lat), plus any other attributes you want) in Google Sheets/Excel/OpenOffice; or explore one of the databases above and practice downloading and opening the datasets to get comfortable dealing with vector datasets.
Remember to save your file as a .csv when you are done! A small example of the expected format appears below.
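A purely illustrative sketch of such a file (the place names and coordinates are approximate examples, not authoritative values):

```
ObjectID,Longitude,Latitude,Name
1,-71.1697,42.3362,O'Neill Library
2,-71.1686,42.3356,Gasson Hall
3,-71.1663,42.3401,Alumni Stadium
```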
Some Boston Spatial Databases:
Possibly useful datasets for exploring questions about segregation and redlining.
Hospitals (download CSV)
Neighborhood boundaries (download Shapefile)
Boston Social Vulnerability (download Shapefile) [play with attribute and drawing style options]
Public Schools (download CSV)
Non-Public Schools (download CSV)
Want to check your csv data real quick? Go to Google My Maps, create a New Map, and click "Import" to import your CSV data to make sure it's looking good. Otherwise, we will be checking it later when we get to ArcGIS online!
1.) Explore the project Digital Dante: Original Research & Ideas and consider:
Purpose: How clear the intention/purpose of the project is made
Taxonomy: The effectiveness and clarity of labels and headers
Navigation: How easy it is to move around the site and get to specific info/sections
Wayfinding: How well can you find your way back to information you previously saw
Visual design: How the colors, fonts, visuals, layout, interface, etc. affect your ability to use the site, engage with the content, and how the site makes you feel
Accessibility: Are there any potential accessibility issues?
2.) Draw a rough diagram of the information architecture. (You can keep it simple and focus on the higher end of the hierarchy (e.g., the first three tiers).)
Spatial data adds a geographic dimension to a qualitative and/or quantitative data set, situating it in a particular location within a coordinate system relative to other data points. (The coordinate system can be a real-world system or a locally created one used to meet the needs of a particular project.)
Spatial datasets, in general, come in two distinct forms, vector data (points, lines, and polygons) and raster (or pixel data). Raster and vector data can come together in the creation of a wide variety of mapping projects, from a traditional figure with an explanatory legend and caption, such as might appear in an academic text, to an online interactive platform that allows for the searching or filtering of thousands of pieces of spatial data or hundreds of historical maps.
Vector data includes points, lines, or polygons (shapes made up of straight lines) containing spatial information that represent some sort of feature or event in a physical or imagined landscape and may contain other types of qualitative or quantitative information, called attributes. A point may represent a tree, a city, or a moment in time. Lines might indicate the street grid of a town, the path someone traveled across the world, or a social link between two communities. Polygons can mark the boundaries of a country or voting district, the catchment area of a river, or a single city block.
For example, the relatively simple and ongoing World Travel and Description project from the Burns Library collection pictured below uses vector point data to offer a selection of images and accounts from individuals and their observations about how the cities and landscapes they visited appeared. Users can filter the point data by date or search for particular location names in the search bar.
Raster consists of "cells" of data covering a specific area (its extent), with attribute values in each cell representing a particular characteristic. It may still consist of points, lines, and polygons, but these shapes are themselves composed of pixels (the way a jpeg or other image file type is).
Data of this type may take many forms, such as satellite imagery containing vegetation or elevation data, precipitation maps, or even an historical map, which has been given a spatial reference. Unlike vector data, raster data has a particular resolution, meaning each pixel represents a particular geographic region of a specific size.
Most projects combine various forms of vector and raster datasets.
If you do not already have an ArcGIS account, you will need to create one to get started. If you cannot get an institutional account, you can create a free public one. Note that free accounts have limited functionality and do not allow for embedding, incorporating audio clips, and creating image galleries.
To create your free account, click on "Sign In" in the upper right corner of the StoryMaps site, which will take you to the Esri sign-in page. Then click "Create a public account" and complete the account creation process.
Once you have signed into your account, click "New Story" in the top left corner to create your StoryMap.
From the "New story" dropdown, you can select "Start of scratch," so that your project has does not preexisting block styles in place, or you can choose to start with a "Sidecar," "Guided map tour," or "Explore map tour" content block types.
In the space where it says "Untitled," you can give your story a name. Feel free to add a subtitle as well. (You change them anytime, so feel free to use temporary ones.)
StoryMaps automatically saves, so there is no need to save after launching your new story or after making any changes.
Agisoft Metashape (available in the Digital Studio) is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data (point clouds, meshes, etc.) to be used in GIS applications, cultural heritage documentation, and visual effects production, as well as for indirect measurements of objects of various scales. Its user-friendly workflow sets it above many of the other 3D model-processing platforms.
To create a 3D model from photographs takes only a few easy steps, which are nicely summarized for you in Metashape's "Workflow" dropdown. It even grays out options that are not available to you yet!
In this introduction, we will go step-by-step through the creation of a simple 3D model, in this case using photos of the Doug Flutie "Hail Flutie" statue taken outside of Boston College's Alumni Stadium in Fall 2020 (evident from the mask on Doug's face). This model will be "in-the-round," as it were, but not a full 360 degrees, as the statue is fixed in the ground.
These steps are:
Adding your photos
Aligning your photos/building a dense cloud (if necessary)
Building your mesh
Building your texture
Making your model to Scale
Exporting your model
Let's get started!
Once you are finished taking photos of an object with your camera or camera phone, you will want to copy the images off of your device and place them in a recognizable location on your computer.
For this example, I have created a folder called "Flutie" and placed it on my desktop. Inside of it are all the photos I took of the statue. For tips on taking photos of objects either inside the Digital Studio lightbox or "in the wild," check out our page on Tips and Tricks for Taking Photos for 3D Model Creation or our BC Digital Scholarship Workshop video.
Now that the photos have been offloaded to the computer, it is time to add them to our Metashape project. Our Workflow drop-down makes this easy, as you are able to select individual images to add to the project or an entire folder, which works great in our case.
The photos will load (185 in our case). Metashape will then ask you to choose what type of scene you want to create. For our case, Single Cameras is the proper option, as each image represents a single photo of the object.
The second option, Dynamic scene (4d) is for the reconstruction of dynamic scenes captured by a set of statically mounted synchronized cameras. For this purpose, multiple image frames captured at different time moments can be loaded for each camera location.
Now you can see that your images are loaded in the Photos pane at the bottom of your screen. This is a good time to save your project. It's suggested that you save the project in the same folder as the images, as the project and images are now linked together.
Now that the images are loaded, it's time to actually start processing! We return to our Workflow dropdown and now notice that the Align Photos option is available.
Aligning the photos is the part of the process which takes the longest. At this stage, Metashape finds the camera position and orientation for each photo and builds a sparse point cloud model based on matching pixel groups between images. This point cloud will become the basis of our digital model.
Choosing Align Photos from the dropdown gives you a few options. The default options are generally fine, although the Accuracy setting will determine how long the process takes. In general, High accuracy is a good choice unless you have a very large number of photos (>200), in which case Medium may be a better choice. But really it depends on how much time you have. Aligning photos for the Flutie statue with 185 photos took 20-30 minutes.
At this point, the processing box will appear and let you know as the software aligns your photos, starting by Selecting Points, then Selecting Pairs, Matching Points, which will take the longest, and Estimating camera locations.
Since the pixel groups used for matching are randomly selected, aligning photos several times may produce different results. If you are having difficulty getting your photos to align, try reprocessing (make sure to click Reset current alignment from the Align Photos options)
Sparse Point Cloud, the result of our photo alignment, can be seen above. Notice that the general shape of the statue is already apparent, which means that the photos were taken reasonably well. The blue squares represent the calculated locations of where the images were taken. Clicking on an image in the Photos pane will highlight the image chosen on your model, useful for troubleshooting issues. Finally, small checked boxes will appear next to the images that have successfully aligned; if a number of your images have not aligned, you will want to rerun the Align Photos process or retake your images to cover the portion of your object that is having issues. In our case, all the photos were aligned, which is fantastic!
If processing in the Digital Studio, be aware that inaction on the computers may eventually log you out and cause you to lose your work. Keep an eye on the processing, especially in the Aligning Photos stage!
You might notice from the image above that there is a lot of the surrounding environment that appears in our point cloud, particularly the concrete surface surrounding our statue. This can often take place if it is difficult or impossible for your object to take up the majority of each picture taken.
This moment allows for the opportunity to clean up the point cloud before building your mesh. There are a variety of ways to clean the point cloud, but the easiest is to use the Select and Crop tools, which work the same way as cropping an image.
First, use the Select tool to roughly select the object itself. Once the area you want to keep is selected, use the Crop tool to crop everything else out. Easy!
The Delete tool seen above works in the opposite way of the crop tool. Clicking the X will delete whatever points you have selected, which offers a second way to clear out unwanted points!
For objects with a lot of detail, which need a very high-resolution mesh, it may be necessary to build a Dense Cloud after aligning your photos. A dense cloud is simply what it says: a denser point cloud created from points that align between your photos.
Choosing this option will again ask you how high you want the accuracy of your dense cloud to be. Note that the dense cloud process can take several hours depending on the number of photos taken, so be sure you have the time available before starting it.
If working to build a full 360 degree model, building a dense cloud is often necessary to merge the top and bottom of the model. This process will be covered in a future tutorial.
In the dense cloud seen above, note how some cleaning of the point cloud using the Delete and Crop tools would be useful before moving on to the creation of the mesh, as described in the section above.
Now that your photos are aligned, a new option appears in our Workflow dropdown, building the mesh of our digital object. Fortunately, this process is much faster!
In short, this step takes your point cloud, which simply represents a group of points floating in space, and turns it into an actual 3D surface, a mesh, by connecting these points together.
The Build Mesh options are not too complex. They allow you to choose your point cloud (Sparse or Dense, if you made one) and pick the number of faces you want your mesh to have. The final option is Surface type, which should generally remain set to Arbitrary (3D).
The other option, Height field, is optimized for modeling planar surfaces, such as terrains or bas-reliefs. It should be selected for aerial photography processing, as it requires less memory and allows for processing larger datasets.
The last major step in processing our model is to build the texture. The texture is the colored overlay, which will sit on top of our created 3D mesh. Again, we simply return to our Workflow dropdown and select Build Texture!
The options here are a bit more complex than in other steps, though in general the defaults are fine. The breakdown is as follows:
Texture type: Diffuse map (Default) is the normal texture mapping, Occlusion map is used for calculating ambient lighting so is not necessary for basic models
Source data: will change based on the texture type. For our regular Diffuse texture, it will be the images
Mapping mode:
Generic (default): the program tries to create as uniform a texture as possible.
Adaptive orthophoto: the object surface is split into the flat part and vertical regions. The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in such regions
Orthophoto: the whole object surface is textured in the orthographic projection. The Orthophoto mapping mode produces an even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions.
Spherical: appropriate only to a certain class of objects that have a ball-like form. It allows for continuous texture atlas being exported for this type of object so that it is much easier to edit it later.
Single photo: generate texture from a single photo. The photo to be used for texturing can be selected from 'Texture from' list.
Keep uv: generates texture atlas using current texture parametrization. It can be used to rebuild texture atlas using different resolutions or to generate the atlas for the model parametrized in the external software.
Unsure? Just use Generic and see how it looks
Blending mode:
Mosaic (default) - implies a two-step approach: it does blending of low-frequency component for overlapping images to avoid a seamline problem (weighted average, the weight being dependent on a number of parameters including proximity of the pixel in question to the center of the image), while high-frequency component, that is in charge of picture details, is taken from a single image - the one that presents a good resolution for the area of interest while the camera view is almost along the normal to the reconstructed surface in that point.
Average - uses the weighted average value of all pixels from individual photos, the weight being dependent on the same parameters that are considered for a high-frequency component in mosaic mode.
Max Intensity - the photo which has the maximum intensity of the corresponding pixel is selected
Min Intensity - the photo which has the minimum intensity of the corresponding pixel is selected.
Disabled - the photo to take the color value for the pixel from is chosen like the one for the high-frequency component in mosaic mode.
Unsure? Go with the default Mosaic first; if you are getting weird blendings of color, try Average instead
Texture size/count: Specifies the size (width & height) of the texture atlas in pixels and determines the number of files for the texture to be exported to. Exporting the texture to several files allows you to achieve greater resolution in the final model texture. Again, the default is probably fine for beginning models.
In some cases, you will want your model to be "to scale" so that it can easily be used in other immersive technologies (gaming, AR, VR). This process is quite easy in Metashape, as long as you have taken a few distinct measurements of various portions of the object you are modeling.
In some cases, high-tech cameras or drones that incorporate GPS technology into their workflow may automatically provide spatial information for scaling your model. No need to worry about the manual process performed here in that case!
In order to scale your model, you need to pick precise points which you are measuring between. This can be done either from the model itself or from individual pictures. Simply right click and select Add Marker
Each marker will have a name (point 1, point 2, etc). To tell Metashape the distance between these two points, you need to swap from the Workspace pane that you have been using so far to the Reference Pane. This can be done at the bottom left of your Metashape window.
Your reference pane will look something like this. Notice that it already contains the two markers we just created (point 1 and point 2). The images listed at the top will contain spatial information for your photos if your camera automatically includes GPS data.
To add the length between your two points, select both points in the reference pane, right click, and choose Create Scale Bar. This will add a scale bar to the bottom portion of the pane, where you can then type in the distance you measured between the points.
Note that the measurements are in meters, so be sure to convert appropriately!
Once you have added your scale bars, it's time to see how much error your model has! Simply go up to the reference toolbar (right above the list of images in the Reference pane) and click the Update Transform button. Now, taking your measurements into account, the software will scale your model and tell you how much error it has.
I only estimated the measurements above, and you can see my error is pretty bad (18 cm)! In general, it is possible to have subcentimeter errors, though this depends on the size of the model you are making. The larger the model, the larger the error you should expect.
Now that your model is complete for now (scaled or not), you'll want to share it!
Metashape offers a variety of ways to share your model by going to File --> Export --> Export Model. Here are a few of the most common:
Wavefront OBJ (.obj): One of the most commonly used 3D mesh file types, it is used for sharing models using 3D software such as Meshlab and Cloudcompare. If you want to share your models on the online 3D model presentation platform Sketchfab, definitely export in this format for uploading.
Check out the beginning of this tutorial for information on how to upload your model into Sketchfab
3D pdf (.pdf): A pdf, but in 3D! Anyone using Adobe Reader will be able to view your model straight from their computer. A good choice for mass distribution when others might not have the technical skills to open an obj.
3D printing (.stl): Want to prepare your model for 3D printing? .STL is the file type used by most 3D printers (check out the BC 3D printing page here).
That's it for this introduction to processing 3D models! A future tutorial will go through the process of creating full 360-degree models from photos using chunks, but for now we leave you with this example model. Meanwhile, check out the Metashape documentation for more information!
Want to see more models made at Boston College? Check out our Sketchfab Collection!
Explore one of the databases discussed in the presentation and download an historical map of interest (or use one of your own if you already have one!). Then, using the MapWarper online tool, georeference your map with at least 4 points.
David Rumsey Map Collection
Library of Congress Map Collection
USGS Historical Topographic Map Explorer
To look at your data against a historical Boston redlining map, we recommend using one of the maps from the Boston Public Library.
Georeferenced version of the redlining map:
Constellate, the new text and data analytics service from JSTOR and Portico, is a platform for learning and performing text analysis, building datasets, and sharing analytics course materials. The platform provides value to users in three core areas: they can teach and learn text analytics, build datasets from across multiple content sources, and visualize and analyze their datasets.
Text analysis begins with a research question or curiosity and involves using digital tools and one's own analytical skills to explore texts, be they literary works, historical documents, journal articles, legal briefs, transcribed interviews, or tweets. It is used in a wide variety of disciplines. Approaches can be quantitative or qualitative, and tools range from coding and scripting languages to "out of the box" platforms like Voyant and Lexos.
Text mining (a term used more in the humanities), data mining (a term used more in the sciences and social sciences), and web scraping are techniques that use coding, scripting, and "out of the box" tools to gather text and create a corpus (or dataset).
In this two-part exercise, you will dive straight into Voyant to get a sense of how the tool works and what the text analysis process can look like. In part one, you will learn how to copy and paste text into Voyant and see how different Voyant tools work together. In part two, you will learn how to upload a text file to Voyant and a little about preparing a text for text analysis.
1.) Copy Martin Luther King Jr.'s "I Have a Dream" speech from this site. (If the site is down, search for a full version of the speech.) Make sure only to copy the speech and not other text on the webpage. (Incorporating any other text will impact the results.)
2.) Go to Voyant
3.) Paste the text into the "Add Text" box and click "Reveal":
The results should look something like this:
4.) To get a sense of how Voyant works, click on the word "freedom" in the Cirrus [A], or word cloud, then scroll through the Reader [B] and notice how the word "freedom" is highlighted throughout.
In Trends [C], notice how the line graph only shows "freedom." Now click on one of the line graph points and see how the Contexts [D] changes to show the context in which "freedom" appears throughout the speech. (It should look like the below).
Note: If you click the question mark in the upper right corner of each tool, e.g., Cirrus, you will get an explanation of that specific one.
Text analysis is more often associated with working with a large corpus (for example, all the works of a single author) or an enormous one (for example, all fiction publications from 1800-1900). In the case of a smaller corpus, a single speech being particularly small, using a text analysis tool like Voyant can facilitate close reading and is especially good for examining structure and word usage.
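Under the hood, tools like Cirrus and Summary are largely reporting term frequencies. If you are curious what that looks like outside Voyant, here is a minimal Python sketch that counts the most frequent words in a plain-text file; "speech.txt" is just a placeholder filename for any text you have saved locally.

```python
# Minimal word-frequency sketch -- roughly what Cirrus and Summary report.
# "speech.txt" is a placeholder; point it at any plain-text file you have.
import re
from collections import Counter

with open("speech.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)       # crude tokenization
counts = Counter(words)

for word, n in counts.most_common(10):     # ten most frequent words
    print(f"{word}\t{n}")
```

Note that, unlike Voyant, this sketch does not remove stopwords, so common words like "the" and "of" will dominate the top of the list.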
1.) Download the following text file. (It's The Complete Works of Shakespeare from Project Gutenberg.) It will likely download to your downloads folder or desktop.
2.) Go to Voyant again to launch a new instance and upload the text file by clicking on "Upload," navigating to the downloaded file, and selecting it. It will automatically "reveal" and will look something like the following:
Take a minute to look the Voyant instance over. Notice words like "shall," "hath," and any special characters in the word cloud. Notice the Project Gutenberg "boilerplate text" in the Reader. In Trends, notice that the horizontal line has numbers, and notice that as you scroll down in Contexts, the name in the Document column comes from the text file name and never changes.
3.) Now take a look at this Voyant instance (also seen in the image below) that contains The Complete Works of Shakespeare as well. This time the text was prepared prior to it being uploaded.
Notice that words like "shall," "hath," and special characters are no longer in the word cloud. This is because stopwords were applied to remove them. Notice that the boilerplate text is gone. This is because it was deleted from the text file before being uploaded. In Trends, the play names can now be seen below the horizontal line, and they can also be seen in Contexts' Document column. This is because the file, which contained all of Shakespeare's plays, was cut up into individual text files, one for each play. The files were also renamed to represent the play titles. The sonnets, which were also in the text, were removed so that the instance would be solely focused on plays.
As you explore Voyant, keep in mind that it, like all text analysis tools, does not do the analysis for you. It provides ways into texts that enable users to come to conclusions based on their own knowledge and analytical skills.
For this exercise, you will be given an imaginary research topic and questions.
Imagine you are studying Frederick Douglass' rhetorical arguments against slavery, and you notice a lot of mentions of family. This sparks your curiosity and makes you wonder: How does Douglass evoke the idea of family in his arguments? How much and when does he mention it? What rhetorical purpose might these mentions serve? Does Douglass more often talk about family in the context of slaveholders or slaves?
You decide to focus on words like wife, mother, husband, father, child, baby, infant, family, and parent with the understanding that you can expand your list later.
To get started, you need to create your corpus. You will be acquiring the text from Project Gutenberg, which has thousands of texts covering a range of genres and topics. (There are numerous other text sources, some of which can be found on this text repositories list.)
If you want to skip this part of the process and go straight to the Working in Voyant section, you can download the prepared text files below. (You will need to unzip the file to upload the individual text files to Voyant.)
Go to Project Gutenberg and search "Frederick Douglass" (or see his works here). The results should look like the following:
From this list, Narrative of the Life of Frederick Douglass, an American Slave; My Bondage and My Freedom; Abolition Fanaticism in New York; and Collected Articles of Frederick Douglass will be used.
a.) Now you will "scrape" (or extract) the text from the site. This begins with getting the text URLs (web addresses). To do this in Project Gutenberg, you go to each text's landing page, select "Plain Text UTF-8," and copy the URL.
For example, here is the Narrative of the Life of Frederick Douglass landing page:
And here is the URL that you get after clicking on Plain Text UTF-8:
For a shortcut, you can get all of the URLs here:
b.) Go to Lexos. Copy and paste the URLs into the "Scrape" box on the Lexos landing page (also the "Upload" page). Click the "Scrape" button. (It should only take a few seconds since the texts aren't that big.) When it is done, you will see the texts in the "Upload List" box.
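If you prefer working in code, the same "scrape" step can be done with a few lines of Python using the requests library. The URL below is only an example of the Plain Text UTF-8 link format; substitute the actual URLs you copied in step a., one per file.

```python
# A code alternative to the Lexos "Scrape" step, using the requests library.
# Replace the example URL(s) with the Plain Text UTF-8 URLs copied in step a.
import requests

urls = {
    "narrative.txt": "https://www.gutenberg.org/cache/epub/23/pg23.txt",  # example only
}

for filename, url in urls.items():
    response = requests.get(url, timeout=30)
    response.raise_for_status()                     # stop if a download fails
    with open(filename, "w", encoding="utf-8") as f:
        f.write(response.text)
    print(f"saved {filename} ({len(response.text):,} characters)")
```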
Now you will prepare the text. This means things like getting rid of punctuation, making all the text lowercase, using lemmas, getting rid of tags, cutting up the text if you want it divided into smaller units, and making any other choices that suit your project.
a.) To prepare the text, click on "Prepare" in the top navigation menu (see image below) and then click on "Scrub." (A little bit below, you will find an explanation of the preparation choices being made.)
b.) On the Scrub page, you can make multiple decisions that will affect the text. It can be necessary to experiment with how you scrub your text. For now, select "Make Lowercase," "Remove Digits," "Scrub Tags," "Remove Punctuation," and "Keep Hyphens."
It should look like this:
Click "Apply."
c.) Now you are going to apply lemmas. It is important that you scrub the text before applying them. Doing so with the settings being used will get rid of the punctuation, which is necessary for some of the lemmas to work.
Cut and paste these lemmas into the "Lemmas" box:
It should look like this:
Click "Apply."
"Make lowercase" made all characters lowercase. This choice is best when using case-sensitive tools, which treat capitalized and lowercase words differently. For example, in certain tools, words capitalized at the beginning of sentences are seen as being different from the same word appearing in lowercase within the sentence.
"Remove Digits" and "Scrub Tags" got rid of unnecessary digits or distracting HTML tags that might be in the text.
"Remove Punctuation" and "Keep Hyphens" got rid of punctuation that might impact the effectiveness of the lemmas being used but kept hyphenated words intact.
Lemmas group together words so they can be analyzed as a single item. After lemmatization, it is easier to count, search, and categorize the grouped words. It is also easier to create stopword and white lists, as only one version of a word needs to be added.
For example, "wife," "wifes," and "wives" will all appear as "wives." Note that "wifes" was "wife's." The first scrub application got rid of the apostrophe. Were the apostrophe not removed before applying lemmas, the lemmatization process for that word would not have worked.
b.) When it is done, click the "Download" button, and a zip file should download to your computer. Find the file and open it. You should see a folder with individual text files. (They are likely in your download folder or on your desktop.)
c.) Open each file to look for text not related to Douglass's work, e.g., Project Gutenberg boilerplate information at the beginning (pictured below) and end of the text.
This is when you can also decide whether to delete paratextual information, e.g., introductions, prefaces, table of contents, indexes, etc. Choosing whether or not to keep this kind of information is part of the intellectual decision-making that goes into text analysis.
d.) When you are done, save your files.
e.) Rename the files (by clicking on each file name) to clarify which file is which text. Below are the recommended names, "abolition," "articles," "bondage," and "narrative." They use the first keyword from each title. Once this is done, you are ready to upload your files to Voyant.
For this second half of the tutorial, you will be introduced to some of Voyant's functions that will help you explore the proposed research questions. Here, again, are the premise and research questions:
Imagine you are studying Frederick Douglass' rhetorical arguments against slavery, and you notice mentions in various works that evoke the idea of family. This sparks your curiosity and makes you wonder: How does Douglass evoke the idea of family in his arguments? How much and when does he mention family? What rhetorical purpose might these mentions serve? Does Douglass more often talk about family in the context of slaveholders or slaves?
Going forward, you are encouraged to follow the various steps presented and to explore on your own.
Launch a new Voyant instance, click on the "Upload" button, and navigate to and select the edited Douglass files. When selecting the files, it should look something like this:
The results should look something like this:
The particular tools and layout you initially see are called the "default skin." It displays the following tools:
a.) Notice that there are other view options within each box. For example, "Cirrus" also has "Terms" and "Links." Take a moment to explore the tools and their various options.
b.) Notice that when you hover to the left of any of the question marks (even the one in the blue field at the very top right of the page) a toolbar of icons appears:
These provide access to a range of options and functionalities: The arrow icon [a] allows you to export a URL or embed code for a specific tool or the entire project. It also allows you to export images. The window icon [b] is where you go to change the tool to a different one. The switch icon [c] is where you go to define options for that specific tool. For example, it is where you go to add stopwords, create categories, and change fonts. The question mark provides information about that specific tool.
c.) Take a little time to explore this toolbar as it is key to using Voyant effectively, and we will be using it quite a bit below.
Stopwords are words that you don't want to include in the results you see in certain tools, e.g., Cirrus and Summary's "most frequent words in the corpus." When applying stopwords, you are not deleting them; they are just not visible.
Choosing stopwords is part of the intellectual decision-making that goes into text analysis. For example, in the context of the research question posed here, one could decide to stop the words "slavery," "masters," and "slaves" since those terms are pervasive and are understood to be there. Getting rid of words that are considered inconsequential, at least within the context of the research question (e.g., "like" and "mr"), can also be helpful. By stopping these words, other words will become more visible and might inspire new ideas and inform the analysis.
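Behind the scenes, applying stopwords simply means leaving those words out when frequencies are displayed. Here is a minimal sketch of the idea, using a short invented word list as a stand-in for a real text.

```python
# Minimal stopword sketch: hide chosen words from a frequency count.
from collections import Counter

words = ["slavery", "families", "mr", "mothers", "slavery", "children", "like"]
stopwords = {"slavery", "masters", "slaves", "mr", "like"}

visible = Counter(w for w in words if w not in stopwords)
print(visible)   # Counter({'families': 1, 'mothers': 1, 'children': 1})
```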
a.) To add stopwords, click on the switch or "define options" icon in the Cirrus tool (or in any other tool).
b.) Click "Edit List" next to the stopwords dropdown menu.
c.) Add the words, "slaves," "slavery," "masters," "mr," and "like," putting each one on a new line.
Save them and click "Confirm." You should see a change in the word cloud. If one of the words does not disappear, add it again. There could be a typo issue.
If you only wanted to apply stopwords in that particular tool, you would uncheck "apply globally."
Please note: When your text is first loaded in Voyant, the app automatically applies stopwords. You can turn this off by selecting "None" in the stopword dropdown menu. You can also remove stopwords from the list simply by deleting them, saving, and confirming the changes.
A white list is essentially the opposite of stopwords. It involves creating a list of the only words that you want to see in the Cirrus results.
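In code terms, a white list is the same filter inverted: a word is shown only if it is on the list. A quick sketch, again with an invented word list:

```python
# White list sketch: keep only the listed words (the inverse of stopwords).
from collections import Counter

words = ["slavery", "families", "mr", "mothers", "slavery", "children", "like"]
white_list = {"mothers", "fathers", "children", "babies", "infants",
              "husbands", "wives", "families", "parents"}

visible = Counter(w for w in words if w in white_list)
print(visible)   # Counter({'families': 1, 'mothers': 1, 'children': 1})
```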
a.) To create a white list, click on the switch or "define options" icon in the Cirrus tool (or any other tool).
b.) Click "Edit List" next to the white list dropdown menu.
c.) Add the words: mothers, fathers, children, babies, infants, husbands, wives, families, parents, putting each one on a new line. Then save them and click "Confirm."
The results should look something like the below. (If one of the words does not disappear, add it again. There could be a typo issue.)
You can turn off the white list by selecting "None" in the white list dropdown menu. You can also remove words by deleting them from the list and resaving and confirming the changes.
You can also create categories that group words. They can be applied in many but not all tools.
a.) To create a category, click the switch or "define options" icon in the Cirrus tool (or any other tool).
b.) Click "Edit" next to the Categories dropdown menu.
When you open up Categories, you will see that Voyant has two default ones, "positive" and "negative." To add a new category, click "Add Category" [b] and give it a title such as "family." To add terms to that list, search for them in the search box [c], and when they appear in the Terms box [d], drag them to the new categories list. To remove a term from a list, select the word and then click "Remove Selected Term" [a].
c.) Create a new category using "family" as the name and add the terms "mothers," "fathers," "children," "babies," "infants," "husbands," "wives," "families," and "parents." (It should look like the image above.)
After creating your category, you can apply it in a specific tool. In the tool's search box, search the list name with an @ at the beginning, for example, @family. (Do not leave a space between the @ and the name.) The results will show only occurrences of terms in that category.
d.) In the Contexts tool, search @family in the search box and explore the results:
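A category search like @family amounts to counting any occurrence of the grouped terms, document by document. Here is a small illustrative sketch of that idea; the two "documents" below are short invented strings, not the actual Douglass texts.

```python
# Sketch of what an @family category search does: count any category term per document.
import re

family = {"mothers", "fathers", "children", "babies", "infants",
          "husbands", "wives", "families", "parents"}

documents = {
    "narrative": "his wives and children were sold away from their parents",
    "bondage":   "the slaveholder and his own children lived in the great house",
}

for name, text in documents.items():
    words = re.findall(r"[a-z-]+", text.lower())
    hits = sum(1 for w in words if w in family)
    print(f"{name}: {hits} family-term occurrence(s)")
```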
The following will show you how to change out tools. (To learn about the many Voyant tools to choose from see the tools list.)
a.) In the Trends (or any other tool), click on the window or "choose tool" icon.
b.) Click on "Visualization" and then "MicroSearch" and the tool should appear.
As Voyant describes MicroSearch, "each document in the corpus is represented as a vertical block where the height of the block indicates the relative size of the document compared to others in the corpus. The location of occurrences of search terms is located as red blocks...Multiple search terms are collapsed together." In the tool, you search the term(s) you want to see appear in the visualization.
c.) Search "children" in the search box.
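MicroSearch is essentially plotting where in each document a term occurs. For intuition, here is a tiny sketch that reports the relative positions (0 = start of the document, 1 = end) of a search term in a word list; the word list is invented.

```python
# Sketch of the idea behind MicroSearch: where, proportionally, does a term occur?
words = ("the children of the enslaved were often separated from "
         "their parents and the children never forgot it").split()

term = "children"
positions = [round(i / (len(words) - 1), 2) for i, w in enumerate(words) if w == term]
print(positions)   # [0.06, 0.81] -- one early occurrence, one late
```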
You can choose individual texts that you want to visualize by selecting and deselecting them. For example, you can choose only to use Douglass' Narrative of a Slave and Articles.
a.) To select the texts, go to Summary (on the lower left) and click "Documents." Then select or deselect texts. This is also where you can modify texts, meaning you can delete them and upload new ones. To make these changes, click "Modify."
You can save or share an entire Voyant instance or individual tools by exporting the URL. (Exporting a tool launches that tool in its own window.)
To export the URL of an entire Voyant instance or a specific tool (see this Cirrus white list example), click on the arrow or "Export URL" icon in the very top right corner of the Voyant instance. Then click "Export," and a new window will launch. Copy and keep the URL for that window. To get the embed code, select the option "an HTML snippet for embedding this view..." and then click "Export." If you make changes to your project, you need to re-export to get a URL or embed code that reflects those changes.
Exporting the entire Voyant instance options:
Exporting specific tools options (notice that you can also export images. Select "export a PNG image..."):
You have completed Exercise Two and the tutorial. If you haven't already, now is a good time to explore the Voyant user's guide. You are also encouraged to experiment with texts that are part of your own research interests.
Having gained familiarity with the basic functionality of Reality Composer, challenge yourself to create a simple AR scene with behaviors, using prebuilt models or some imported from Sketchfab. Some possible scenarios with the basic models:
A pool table/bowling lane where the ball flies into the other balls when tapped
"kicking" a football through the uprights
A rocket ship flies into space with sounds (and then slowly comes back to its landing spot)
Whatever else you want!
Download the following 2 datasets to your computer:
Hands-on exercise:
For this exercise, you will work with the 2019 Massachusetts crime data by city. The spreadsheet includes one table and some extra formatting.
1. Connect the Excel file to Tableau.
2. Drag over the “Crime 2019” table to the area that says “Drag sheets here”. Look at your data table. Does it look right to you? Consider the table format, header, footer, and variable type.
3. Use the Data Interpreter function in Tableau and review the Data Interpreter results in the preview area.
4. Check out the data table. How does it look now? Consider the table format, header, footer, and variable type.
5. Change the variable names to better reflect what they are, e.g., crime total 1 to Violent crime and crime total 2 to Property crime. (Provide a screenshot.) If you prefer working in code, a rough pandas equivalent of this cleanup appears after the exercise.
6. What potential data visualization questions can you ask from this data set?
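If you would like to check or reproduce this cleanup outside Tableau, here is a rough pandas sketch. The filename, sheet name, number of header rows to skip, and original column names are hypothetical; adjust them to match the actual spreadsheet.

```python
# Optional: the same cleanup done in pandas instead of Tableau's Data Interpreter.
# Filename, sheet name, skiprows, and column names below are hypothetical.
import pandas as pd   # reading .xlsx files also requires the openpyxl package

df = pd.read_excel("crime_2019.xlsx", sheet_name="Crime 2019", skiprows=3)
df = df.rename(columns={"crime total 1": "Violent crime",
                        "crime total 2": "Property crime"})

print(df.dtypes)    # check that each variable has the expected type
print(df.head())    # preview the cleaned table
```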
This is a guide to installing and running Tableau Desktop on your personal computer. Please note that all workstations in the Digital Studio (on the second floor of O'Neill Library) already have Tableau Desktop installed.
Tableau has versions for both Windows and Mac. Detailed system requirements for Tableau are listed here: https://public.tableau.com/en-us/s/download.
Tableau Desktop is visualization software used to create data visualizations and interactive dashboards. If you are a student, instructor, or researcher, you can request a free, renewable, one-year license for Tableau Desktop through the Tableau Academic Programs. For instructors and researchers, the individual license is valid for one year and can be renewed each year if you are teaching Tableau in the classroom or conducting non-commercial academic research. The student license also expires after one year; you can request a new license each year as long as you are a full-time student.
If you are a member of the public, please consider using Tableau Public instead, which is the free version of Tableau Desktop.
Here are the steps for students. (The installation process for instructors and researchers is similar; just follow the instructions on the screen.)
Step 1: Go to https://www.tableau.com/academic/students (Here is the link for instructors.)
Tableau Student
Step 2: Click on Get Tableau for Free.
Step 3: A web form will pop up. Complete all of the requested information, using your official BC email address when you fill out the form.
Step 4: Next, click on Verify Student Status.
Step 5: You will receive an email with a product key and link to download the software.
Step 6: Click on Download Tableau Desktop from your email and copy the product key.
Step 7: Follow the installation instructions to install Tableau to your computer.
Step 8: Activate your Tableau with your license key.
For instructors and researchers, click on Request Individual License on the screen.
The pop-up request form is similar to the student one described above, but it additionally asks "I plan to use Tableau Desktop for..." There you can select "Teaching only," "Noncommercial academic research only," or both. Select the option that best fits your needs. You do not need to be an instructor to get a copy of Tableau.
Tableau Public
Following are the general steps to download Tableau Public:
Go to Tableau Public Download Page: public.tableau.com
Enter your email address and click "Download the App".
Once the installation file has been downloaded to your computer, run it and follow the prompts to install Tableau on your Mac or PC.